How to stop Pengine self-destruction? (destruct parameter is False)

I am using SWI-Prolog 8.0.2 on Ubuntu Linux 18.04.

I do have the destruct parameter that is passed to the create command set to false, so the Pengine instance does persist after a query. But if I don’t make any calls against the Pengine server for several minutes, the next call that uses the engine’s original unique ID (the one returned in the response payload from the Pengines server) fails with a “not exist” error. It appears that the Pengine instance still self-destructs after a certain period of inactivity; I haven’t been able to determine the exact timeout value yet.

How do I stop this? My whole intent is to leave the Pengine instance up “forever”. When I create the Pengine instance, I load a large number of predicates and do other time-consuming initialization work that I don’t want to force the user to sit through on every transaction. So for each user transaction I only save/load that user’s data; the rest of the code, which is global to all users, remains resident. Unfortunately, this time-based self-destruct defeats that strategy.

I looked through the parameters for the create call again and I don’t see anything relevant:

http://www.swi-prolog.org/pldoc/doc_for?object=manual

Also, can someone tell me what the Pengine ping command is typically used for? Is it just for checking whether your Pengine instance has died so you can relaunch it, or are there other uses? I tried searching for pengine_ping in the manual, but I don’t see it in the list of Pengine predicates when I enter pengine_ in the search box.

I think the parameter is called destroy, no? Any Pengine is subject to timeouts. There are two: the time to come up with an answer and the time to wait for the client. I don’t recall their names offhand, but they shouldn’t be hard to find.

SWISH uses ping to show resource usage to the user and tell the user all is still working fine.

Hi Jan. Yes, that’s a typo. I double-checked my JavaScript code, and I’m using the correct parameter name destroy, not destruct as I put in my post:

				// Create a new Pengines object for our usage.
				self.pengine = new pengines_lib.Pengine({
						server: self.pengineEndpoint,
						application: 'inferx',
						// WARNING!: If we don't set the "destroy" property explicitly to FALSE, then
						//	the Pengine will self-destruct after the first query is made to it, since
						//	the default value for that property is TRUE.
						destroy: false,
						// The initial query tells the Pengine instance to initialize the user data area.
						// ask: 'initialize_user_data()',
						onabort: self.onQueryAbort_in_promise,
						oncreate: self.onPEngineCreate_in_promise,
						ondestroy: self.onPEngineDestroy_in_promise,
						onfailure: self.onQueryFailure_in_promise,
						onerror: self.onQueryError_in_promise,
						onprompt: self.onQueryPrompt_in_promise,
						onstop: self.onQueryStop_in_promise,
						onsuccess: self.onQuerySuccess_in_promise
				});

I also know this because, before I started using the destroy parameter, the Pengine instance destroyed itself right after the first query. Now it happens after several minutes of inactivity, i.e., after several minutes without any calls to the Pengine. If there is a way to keep the Pengine instance alive “forever”, that is, until I explicitly call the Pengine object’s destroy() method, I’d really like to know.

Something like this:

?- set_setting(inferx:idle_limit, 1_000_000).

You cannot disable the timeout, but I guess making it ridiculously long is good enough. You cannot set it from the client, as it is there to limit server resources.
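
For reference, a minimal sketch of where this could go in the server’s load file, assuming the application is registered with pengine_application/1 under the name inferx (as in the client code above) and that the per-application setting is idle_limit, whose default of a few minutes matches the behaviour you describe:

    :- use_module(library(pengines)).

    % Register the application the client connects to with application: 'inferx'.
    :- pengine_application(inferx).

    % Raise the idle limit (in seconds) so idle instances are not reclaimed
    % for a very long time. The timeout cannot be disabled, only made large.
    :- set_setting(inferx:idle_limit, 1_000_000).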

@jan I think I may have a significant misconception about Pengines.

When I first started using it, I assumed that each Pengine instance had its own memory space, that is, that it was a completely separate SWI-Prolog instance with its own memory and Prolog database. But now I believe that a Pengine instance is just a lightweight thread and that memory and the database are shared?

I say this because when I start my Pengines server, I load up a bunch of code as part of my Pengines app’s run file. It’s obvious that the predicates I loaded at server startup are available from my client Node.js code. This indicates that the code loaded by the launch code is shared by all Pengine threads, and that the database is too.

So my previous comments about not wanting to load a bunch of code with each user interaction are in error. If creating a Pengine instance is just the creation of a lightweight thread on Linux, and all the code I loaded at server startup is static rather than volatile, then I should be fine creating a Pengine instance with each user interaction. Does that sound right? If I’m right, then on Linux, creating a Pengine instance should take only a few milliseconds?

If so, then I’ll just create an instance with each Node.js request (each user interaction), and then I don’t have to bother with keeping the Pengine instance “permanently” resident.

Also, if what I now say is correct, then I assume that the Prolog database (assert/retract) is shared by all Pengine instances? If that is true, then I assume I need to use mutexes when Pengine instances assert or retract clauses of the same predicates? Are mutexes necessary if the Pengine instances never modify predicates with the same name, i.e., if I assert/retract predicates whose names are unique to each user and which would therefore never be modified at the same time from different threads?

Yes, a Pengine is a thread that operates on a temporary module. That is where it loads src_text. If you assert, data goes there too, so other Pengines do not see your data. Creation is indeed quite cheap, the cost mostly depending on how much source code you load into it.

If you want to share data, or preserve data longer than the Pengine lives, you need to load code into the server that is accessible from the pengine application and manages this data. You must declare this API as safe, and you must make sure it is safe and also thread-safe, as it will indeed be called concurrently.
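
Something along these lines, as a minimal sketch (the module, predicate, and mutex names are illustrative, not an existing API):

    :- module(user_store, [save_user_data/2, load_user_data/2]).

    :- dynamic user_data_store/2.

    % Serialise updates with a mutex so the retract+assert pair is atomic
    % when called concurrently from several Pengines.
    save_user_data(UserId, Data) :-
        with_mutex(user_store,
                   ( retractall(user_data_store(UserId, _)),
                     assertz(user_data_store(UserId, Data)) )).

    load_user_data(UserId, Data) :-
        user_data_store(UserId, Data).

    % Declare the API safe so sandboxed Pengine queries may call it.
    :- multifile sandbox:safe_primitive/1.
    sandbox:safe_primitive(user_store:save_user_data(_,_)).
    sandbox:safe_primitive(user_store:load_user_data(_,_)).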

If you use Pengines for agent modelling, it may be a good idea to use the temporary module to store the status of the agent and to set the idle limit high enough to keep the agent alive.

Excellent. My intention is to load/save the per-user data from a standard database (e.g., PostgreSQL). I will pass that data in via the src_text parameter when I query the Pengine. After the query completes, I will run a query such as listing(user_data(X)) to gather up all the per-user data, thereby capturing the user’s current “state”, and save it to the database so it can be passed back in via src_text on the query for the next user interaction.
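
One note: listing/1 writes source text to the current output stream rather than binding an answer, so it may be easier to collect the state as a list of terms in the query itself. A sketch, with an illustrative predicate name:

    % Run as the final query of a transaction; State comes back as an
    % ordinary binding in the answer, ready to serialise and store.
    collect_user_state(State) :-
        findall(user_data(Datum), user_data(Datum), State).

The serialised terms could then be written back into src_text on the next interaction.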

Does that sound like a reasonable strategy to you?