I am developing a Java application that generates Prolog files and tests these files against a set of data. I am using SWI-Prolog 9.0.4 and Java JDK 17.0.1.
At each iteration of the process, a Prolog file is consulted N times, where N is the number of datapoints in the dataset. During each consultation, the Prolog file is further queried twice.
At some point, the testing process stops responding (blocked in q.hasSolution() in particular, where q is a JPL Query object).
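For context, the testing loop looks roughly like this (a simplified, untested sketch; the file name and the `covers/1` predicate are placeholders for my real code, and it assumes jpl.jar and SWI-Prolog are set up):

```java
import org.jpl7.Query;

// Simplified sketch of one test iteration; names are placeholders.
static boolean testDatapoint(int i) {
    // Reload the generated program together with the background data.
    Query consult = new Query("consult('hypothesis.pl')");
    boolean loaded = consult.hasSolution();  // this is where it eventually hangs
    consult.close();
    if (!loaded) {
        return false;
    }

    // Query the consulted program about the current datapoint.
    Query q = new Query("covers(example" + i + ")");
    boolean covered = q.hasSolution();
    q.close();
    return covered;
}
```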
Hard to say. Repeated calls into Prolog from external languages need precautions to avoid resources building up. In the C interface this is done with PL_open_foreign_frame(). I know too little about JPL to say whether this could be the problem here. I also have some trouble understanding your description:
What is a datapoint here? Do new datapoints add to or replace existing ones? I guess “during” must mean “after”?
Without more details it is hard to say what could be wrong. You might add a call to statistics/0 after each iteration to get an idea whether resources are building up.
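From JPL, that could look something like this (untested; I know the Prolog side better than the Java side):

```java
import org.jpl7.Query;

// statistics/0 prints resource usage (stacks, tables, threads, ...)
// to the Prolog error stream; run it once per iteration and compare.
Query stats = new Query("statistics");
stats.hasSolution();
stats.close();
```

If some number in that report grows with every iteration, that tells you which resource is leaking.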
Surely for data, I would not write a file and consult it. Writing and parsing are slow, and Prolog’s consult is aimed at program code rather than data, so there is a lot of overhead and administration. Instead, use assertz/1 and the other predicates for dynamic database management.
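For example, instead of appending terms to a background file and re-consulting it, add facts directly to the dynamic database (the `datapoint/2` name is just illustrative):

```prolog
% Declare the data predicate dynamic so it may be modified at runtime.
:- dynamic datapoint/2.

% Add one record to the database.
add_datapoint(Id, Value) :-
    assertz(datapoint(Id, Value)).
```

Use retract/1 or retractall/1 to remove facts again when a datapoint is replaced rather than added.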
Hello, thanks for your quick reply.
A datapoint is an entry (record) in a dataset. I am using a background file for new datapoints; that is, before consulting the Prolog file, I append one entry to the background file. Yes, “during” means after consulting the Prolog file.
The Prolog file is developed incrementally: at each iteration of the process I write a new version of the file. I could consider asserting the data (facts) instead of writing it to the background file. I will give it a try.
Thanks.
So you create an ever-growing file that you reload over and over? That is surely hugely inefficient. It should work, though. The reason for the failure is probably in how you call Prolog repeatedly from Java (and possibly in JPL). Again, calling statistics/0 on each iteration to see what is happening would be my first step.
Asserting the data may not fix this, but it will surely be many times faster.
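One JPL-level precaution worth checking (an assumption on my side, as I have not seen your Java code): make sure every Query is closed when you are done with it, since an open query keeps the underlying Prolog engine and its frames alive. Untested sketch:

```java
import org.jpl7.Query;

// Close the query even if the call throws, so the Prolog
// engine it occupies is released back to JPL.
Query q = new Query("consult('program.pl')");
try {
    q.hasSolution();
} finally {
    q.close();
}
```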