Dealing with an asynchronous call in a synchronous manner


I am currently looking to interface Prolog code with a peer system. The interaction is completely asynchronous via websockets.

During Prolog processing I want to query the state of an object within the peer system, and then continue processing with the state received.

Is there a way to pause Prolog processing until an asynchronous interaction completes “under the hood”?


goal(X) :-
    async_process(X, y_object, Y),
    process(X, Y).

Internally, async_process(X, y_object, Y) is meant to do the following:

  1. Send a websocket message to a peer requesting the value of a y_object, using some unique request identifier.
  2. The message is received by the peer, which responds with a websocket message that includes the value of y_object.
  3. In Prolog that message is received with the identifier, and thereby matched with the prior request for Y.
  4. Y is bound – somehow – to the received value.

Step 4 is unclear to me – can it be done?

Or does this need to be architected differently – i.e. goal/1 above has to be made into two parts: one that causes the message to be sent, and another part that is called upon response.

Something like this:

goal(X) :-
    async_process(X, y_object, Y, call_back(goal_call_back)).

goal_call_back(X, Y) :-
    process(X, Y).



The best I can think of for async_process is something like this:

async_process(X, y_object, Y) :-
    thread_self(Me),
    send_request(X, y_object, Me),    % application-specific websocket send (sketch)
    thread_get_message(reply(Y)).     % block until the reply service wakes us

And some service watching for replies, using some additional logic based on the identifier to find the waiting thread and do thread_send_message(TID, reply(Value)).
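Putting the pieces together, the whole pattern might look like the self-contained sketch below. The names ws_send/1, dispatch_reply/2 and the in-process peer stub are mine, not part of any library; in a real system ws_send/1 would transmit over the websocket and dispatch_reply/2 would be called by the reply-handling service.

```prolog
:- use_module(library(gensym)).

:- dynamic waiting/2.                      % waiting(RequestId, ThreadId)

% Send the request tagged with a fresh id, record which thread is
% waiting, then block until the reply service forwards the answer
% with thread_send_message/2.
async_process(X, _Object, Y) :-
    gensym(req_, Id),
    thread_self(Me),
    assertz(waiting(Id, Me)),
    ws_send(request(Id, X)),               % stand-in for the websocket send
    thread_get_message(reply(Id, Y)),      % block here until dispatched
    retract(waiting(Id, Me)).

% Reply dispatcher: looks up the waiting thread by request id.
dispatch_reply(Id, Value) :-
    waiting(Id, Tid),
    thread_send_message(Tid, reply(Id, Value)).

% For a self-contained demo we simulate the peer in-process: the
% "websocket send" immediately computes an answer and dispatches it.
% (The message simply sits in our own queue until we fetch it.)
ws_send(request(Id, X)) :-
    Y is X * 10,                           % pretend the peer returned X*10
    dispatch_reply(Id, Y).
```

Because thread_get_message/1 only removes a message that unifies with reply(Id, Y), each waiting thread picks up exactly its own reply even if several requests are in flight.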

You can also go the now popular ( :slight_smile: ) fiber route, run the code in an engine and yield from it after sending the request. The main scheduler will find the reply, identify the engine waiting for it and resume this engine with the reply.
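A minimal sketch of the engine route, using the SWI-Prolog engine API (engine_create/3, engine_yield/1, engine_fetch/1, engine_post/2, engine_next/2). The scheduler here just simulates the peer by doubling the request id; async_value/2 and demo/1 are hypothetical names.

```prolog
% Run the goal inside an engine; the goal "sends" a request by
% yielding request(Id) and then waits for the scheduler to post
% reply(Value) back into it.
async_value(Id, Value) :-                  % called from inside the engine
    engine_yield(request(Id)),             % suspend, hand request to scheduler
    engine_fetch(reply(Value)).            % resume with the posted reply

demo(Result) :-
    engine_create(R, (async_value(42, V), R = got(V)), E),
    engine_next(E, request(Id)),           % engine suspends at the request
    Reply is Id * 2,                       % scheduler: simulate the peer's answer
    engine_post(E, reply(Reply)),
    engine_next(E, Result).                % engine resumes and completes
```

In a real system the main scheduler would keep a table mapping request ids to suspended engines, and call engine_post/engine_next on the matching engine when a websocket reply arrives.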

Thank you, Jan.

Since any solution will require dealing with asynchronicity, I am thinking of making the code “data” driven – i.e. putting more data into the payload, so that computations can be done acontextually.

That way I will not need to deal with threads or engines directly.

I wonder what this will look like in the end :slight_smile:


In the end, I implemented a session at the Prolog side in order to reduce the payload to essentially an identifier (and a bit more).

Looks like that’s the way to go.


I guess the key saving is that there is no need for an explicit callback.

But I would still need to hold / pass around across websockets a session id linking to the thread id, probably in the dynamic db. And I guess there is no need to use call/2 to call back the callback.

Is it possible to post a value to a queue from outside Prolog …

You mean whether you can access Prolog message queues from C? Currently not. You’d need a call through Prolog.

Hi Jan

My callback-based solution ran into async problems – when I need to aggregate results, waiting on the results is a problem with all the asynchronous “half” websocket calls happening between peers.

I want to try your idea: thread_get_message(reply(Y)) …

But, the problem has a bit changed.

I need to generate a varying number of websocket asynchronous calls (based on a foreach generator), each generating a websocket asynchronous “reply” call – (probably) each in a different thread.

I need thread_get_message(reply(Y)) to wait until all websocket reply calls have completed – either all successfully, or at least one failed.

I wonder how this is set up – perhaps thread_wait/2 is also an option here: each async call adds something to a session (implemented in a module), and the goal in thread_wait counts the number of results to see if all arrived.
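The “wait for all N replies, fail fast on the first error” part can be sketched with plain thread_get_message/1. The message shapes ok/1 and error/1 are my assumption; replies are assumed to be delivered to the waiting thread’s message queue by the websocket reply handlers.

```prolog
% collect_replies(+N, -Values): wait for N replies on this thread's
% message queue, succeeding only if all N succeed. Stops at the
% first error/1 message by throwing.
collect_replies(0, []) :- !.
collect_replies(N, [Y|Ys]) :-
    thread_get_message(Msg),
    (   Msg = ok(Y)
    ->  N1 is N - 1,
        collect_replies(N1, Ys)
    ;   Msg = error(Reason)
    ->  throw(error(peer_error(Reason), _))
    ).
```

A thread_wait/2 variant would instead have each reply handler assert a result into a (module-local) dynamic predicate, with the waiting goal counting the asserted results – the sketch above just keeps everything in the message queue.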

Hope I am making sense in my description.


Actually, perhaps it’s simple.

I don’t (yet) have a requirement to run all generated computations in parallel, so I can “serialize” each reply – so your solution should work.


First, your suggested pattern works great – in particular when wrapping the thread_get_message into a goal within another module – it makes the code really clean.
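For reference, a minimal sketch of such a wrapper module. All names (peer_session, peer_value/3, ws_send_request/2) are hypothetical; the demo stub simulates the peer by posting the reply straight back into the caller’s own queue, which works because the message simply waits there until fetched.

```prolog
:- module(peer_session, [peer_value/3]).

% Hide the send/wait cycle behind a single goal: callers see an
% ordinary synchronous predicate.
peer_value(Session, Key, Value) :-
    ws_send_request(Session, Key),
    thread_get_message(reply(Key, Value)).

% Demo stub standing in for the real websocket sender: "the peer"
% answers immediately by posting to our own message queue.
ws_send_request(_Session, Key) :-
    thread_self(Me),
    atom_concat(Key, '_value', Value),
    thread_send_message(Me, reply(Key, Value)).
```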

But, I ran into a problem when trying to use transactions.

Basically, a transaction would wrap the whole code, including the async_process/3, and if the overall code fails in the end, there should be a rollback.

But currently, when wrapped into a transaction, the code stops processing – i.e. it freezes the debugger, and nothing happens without the debugger either.

Can this be made to work – or are there too many interleaved threaded calls that make the mechanism hang?