Steadfastness of a remote answer collector, Web Prolog style

The example uses pengine_spawn/2, pengine_ask/2, pengine_next/1 and receive/1. Where and how is a pengine/webprolog deallocated? What would happen if I, e.g., ask main/1 with a closed and shorter list? Would main/1 be steadfast and deallocate a pengine/webprolog at the same time?

Welcome to SWI Web Prolog!

?- main([A,B]).

Are there setup_call_cleanup/3 patterns that are possible/impossible or necessary/unnecessary for pengines/webprolog? setup_call_cleanup/3 has been introduced in many Prolog systems, since it helps when accessing or modifying resources.

There is even a draft standard:
https://www.complang.tuwien.ac.at/ulrich/iso-prolog/cleanup

The typical pattern for accessing a resource such as a server is the following, which has the advantage that it can be used via backtracking and also works nicely with a surrounding cut:

fetch(X) :-
    setup_call_cleanup(server_open(Y),
         server_fetch(Y,X),
         server_close(Y)).

It might not be the preferred pattern if a lot of time can pass before the fetch/1 call is backtracked into or cut away, since the server connection would then stay open for a long time.

Nevertheless, it ensures that resources are freed. The cleanup goal is also called in case an exception happens in the call goal or in the continuation.
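To make that concrete, here is a minimal, self-contained SWI-Prolog sketch (the "resource" is just simulated with format/1 calls, and demo/1 is a made-up name for illustration): even though the call goal throws, the cleanup goal still runs before the exception reaches the catcher.

```prolog
% demo/1: the call goal throws, but the cleanup goal still runs.
demo(Result) :-
    catch(
        setup_call_cleanup(
            format("opening resource~n"),    % setup
            throw(something_went_wrong),     % call: throws
            format("closing resource~n")),   % cleanup: still runs
        Error,
        Result = caught(Error)).
```

Querying ?- demo(R). prints both messages and binds R = caught(something_went_wrong).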


I’m not sure I can answer all your questions, but here goes:

Allocation and deallocation of pengines and other actors is performed by (what I refer to as) the actor manager. The actor manager is briefly described in the Erlang’19 paper.

Well, let’s see what happens, but let’s first pass the monitor option so that we can check the end result:

main(List) :-
    pengine_spawn(Pid, [
        node('http://localhost:3060'),
        monitor(true),
        src_text("
            q(X) :- p(X).
            p(a). p(b). p(c).
        ")
    ]),
    pengine_ask(Pid, q(_X)),
    collect_and_print(Pid, List).
    
collect_and_print(Pid, List) :-
    receive({
        success(Pid, [X], false) -> 
            List = [X] ;
        success(Pid, [X], true) ->
            List = [X|Rest],
            pengine_next(Pid),
            collect_and_print(Pid, Rest)
    }).

A test run:

Welcome to SWI Web Prolog!

?- main([A,B]).
false.

?- flush.
Shell got down('61931180'@'http://localhost:3060',exit)
true.

?- 

Whether this told you everything you want to know, I’m not sure.

I’m sure some are possible, but I don’t know if there are any impossible ones. Any ideas?

BTW, for answers to such detailed questions, you may want to download and test the PoC yourself. Bug reports (especially for design bugs) are most welcome!

Yes, that’s the one mentioned in my first post in the first topic related to Web Prolog. But let me provide you with a collection of links so that you don’t have to dig around in order to “RTFM”:

  1. The “Intro to Web Prolog for Erlangers” paper.

  2. A first draft of a manual.

  3. The longer (> 170 pages) manuscript “Web Prolog and the programmable Prolog Web”.

  4. A chapter on Web Prolog as a scripting language for SCXML.

  5. A chapter on marketing ideas for Prolog.

Chapters 4 and 5 are extracted from the next draft of “Web Prolog and the programmable Prolog Web” that I’m working on. This draft is too messy to distribute at this point in time, but it will be announced when I think it’s ready for public consumption.


So equipped with this, let's move on to your other questions:

No, it has nothing to do with links. Unfortunately, I noticed that documentation for flush/0 is missing from the manual. But it’s very simple. Here’s a quote from the intro paper:

Calling the utility predicate flush/0 - also borrowed from Erlang - allowed us to inspect the content of the top-level mailbox.

It means that the shell (or rather the pengine to which the shell is attached) received a message informing you that the pengine which was spawned no longer exists. Erlang has such messages too.

flush/0 is implemented like so:

flush :-
    receive({
        Message -> 
            format("Shell got ~q~n",[Message]),
            flush
    },[ 
        timeout(0)
    ]).

No, as you can see, it’s much simpler than that.

Yes, Web Prolog has exit/2 too, with the same meaning. There’s pengine_exit/2 as well, which is simply an alias for exit/2.

pengine_exit/2 means the same as pengine_destroy/2 in library(pengines). The idea behind the name change is to stick to Erlang-ish naming, as long as it makes sense.


So let’s replace “xxx” with “exit” (and add another argument) in your program, like so:

main(List) :-
    setup_call_cleanup(
        pengine_spawn(Pid, [
        node('http://localhost:3060'),
        monitor(true),
        src_text("
              q(X) :- p(X).
              p(a). p(b). p(c).
          ")
       ]),
      (pengine_ask(Pid, q(_X)),
      collect_in_list(Pid, List)),
      pengine_exit(Pid, hello)).
    
collect_in_list(Pid, List) :-
    receive({
        success(Pid, [X], false) -> 
            List = [X] ;
        success(Pid, [X], true) ->
            List = [X|Rest],
            pengine_next(Pid),
            collect_in_list(Pid, Rest)
    }).

This works, for both ?- main(List) and ?- main([A,B]), just as before. However, I don’t think the use of setup_call_cleanup/3 makes any sense here: pengine_exit/2 will always run, but it won’t do anything at all, because the pid at this point in time refers to a pengine which is already gone.

The goals in the second and third arguments of setup_call_cleanup/3 are deterministic, and they don’t fail or throw exceptions, not even if the spawned pengine for some reason is already dead or if its creation didn’t succeed in the first place. There is a pid, but it doesn’t point to an active pengine. This too is Erlang-ish behaviour.

The only thing that might go wrong is that the call to collect_in_list/2 hangs, i.e. no success messages show up. There could be a network problem, for example.

So what can be done about that? Here’s one way to handle the situation, assuming we want ?-main(List) to always terminate, and the spawned pengine to die:

main(List) :-
    pengine_spawn(Pid, [
        node('http://localhost:3060'),
        monitor(true),
        src_text("
            q(X) :- p(X).
            p(a). p(b). p(c).
        ")
    ]),
    pengine_ask(Pid, q(_X)),
    collect_in_list(Pid, List).
    
collect_in_list(Pid, List) :-
    receive({
        success(Pid, [X], false) -> 
            List = [X] ;
        success(Pid, [X], true) ->
            List = [X|Rest],
            pengine_next(Pid),
            collect_in_list(Pid, Rest)
    }, [
         timeout(2),
         on_timeout(pengine_exit(Pid, hello))
    ]).

Now, if the success messages take too long to arrive, the pengine will be killed. And since we are monitoring it, the following down message will be sent to the mailbox of the calling process.

Shell got down('6131580'@'http://localhost:3060',hello)

Yes, if you pass monitor(false), which is the default, no down message would be sent.

So now you know. But I wouldn’t mind if you RTFM anyway. :wink:

BTW, given the above explanations, you should be able to understand how the implementation of rpc/2-3 works. Here it is:

rpc(URI, Query) :-
    rpc(URI, Query, []).

rpc(URI, Query, Options) :-
    pengine_spawn(Pid, [
         node(URI),
         exit(true),
         monitor(false)
       | Options
    ]),
    pengine_ask(Pid, Query, Options),
    wait_answer(Query, Pid).

wait_answer(Query, Pid) :-
    receive({
        failure(Pid) -> fail;            
        error(Pid, Exception) -> 
            throw(Exception);                  
        success(Pid, Solutions, true) -> 
            (   member(Query, Solutions)
            ;   pengine_next(Pid), 
                wait_answer(Query, Pid)
            );
        success(Pid, Solutions, false) -> 
            member(Query, Solutions)
    }).

And here’s a test:

?- rpc('http://localhost:3060', q(X), [
       src_text("
            q(X) :- p(X).
            p(a). p(b). p(c).
       ")
   ]).
X = a ;
X = b ;
X = c.

?-

Sure, my first (crappy) implementation used POSIX threads (thread_create/3 and friends) only (just like library(pengines)); the current PoC uses Paul Tarau’s engines and fewer threads. An implementation in Erlang might use Erlang processes. An implementation in Java might use something else. What? Well, I leave that to you to propose. The point is that there are many ways to implement Web Prolog, and I’m not yet sure which way is the best. I’m still in the specification phase, and would prefer to remain there for a while.

I’m not sure where you think setup_call_cleanup/3 would help. Can you elaborate?

exit(true) tells the pengine to die after having run the query to completion, whereas exit(false) means you can keep using it for more querying.

That could and probably should be added, but I still don’t understand how setup_call_cleanup/3 would help, so I would use pengine_exit/2 in combination with a timeout as in the previous example.

If you know the first solution is all you want, I would propose to use once(q(X)) as a goal. :wink:
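For example (a sketch assuming the PoC’s rpc/3 as implemented above, and assuming that wrapping the query in once/1 works in the pengine’s context, which I haven’t verified):

```prolog
% Hypothetical top-level interaction: ask for just the first
% solution of q/1, so the pengine has nothing more to do.
?- rpc('http://localhost:3060', once(q(X)), [
       src_text("
            q(X) :- p(X).
            p(a). p(b). p(c).
       ")
   ]).
```

If the PoC behaves as in the earlier test run, this should report X = a and succeed deterministically, so no choice point is left behind to keep the pengine alive.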

But hey, I’m aware of the problem, and I’m not sure a 100% satisfying solution exists for immediately killing (remote) pengines when they are no longer needed. I have a few ideas, and one of the best might be the following. A Web Prolog node can (and should) be equipped with a stateless (think RESTful) HTTP API as well as a WebSocket API. rpc/2-3, as implemented above, has to use the WebSocket API, but rpc/2-3 can also be implemented using HTTP as transport (one might use a transport option to choose between the two implementations). Here’s a somewhat ugly implementation:

rpc(URI, Query, Options) :-
    option(limit(Limit), Options, 1),
    rpc(URI, Query, 0, Limit).
    
rpc(URI, Query, Offset, Limit) :-
    parse_url(URI, Parts),
    format(atom(QueryAtom), "(~p)", [Query]),
    rpc(Query, Offset, Limit, QueryAtom, Parts).
    
rpc(Query, Offset, Limit, QueryAtom, Parts) :-    
    parse_url(ExpandedURI, [ path('/ask'),
                             search([ query=QueryAtom,
                                      offset=Offset,
                                      limit=Limit,
                                      format=prolog
                                    ])
                           | Parts]), 
    setup_call_cleanup(
        http_open(ExpandedURI, Stream, []),
        read(Stream, Answer), 
        close(Stream)),
    wait_answer(Answer, Query, Offset, Limit, QueryAtom, Parts).

wait_answer(error(anonymous, Error), _, _, _, _, _) :-
    throw(Error).
wait_answer(failure(anonymous), _, _, _, _, _) :-
    fail.
wait_answer(success(anonymous, Solutions, false), Query, _, _, _, _) :- 
    !,
    member(Query, Solutions).
wait_answer(success(anonymous, Solutions, true), 
            Query, Offset0, Limit, QueryAtom, Parts) :-
    (   member(Query, Solutions)
    ;   Offset is Offset0 + Limit,
        rpc(Query, Offset, Limit, QueryAtom, Parts)
    ).

To understand how the stateless HTTP API is supposed to work, you should first look at section 4.9 in the paper, and if you want to know (much!) more, at sections 6.3-6.5 in the longer manuscript. The tutorial in the PoC provides a couple of examples.

For an implementation of threaded engines, see https://logtalk.org/manuals/userman/threads.html#threaded-engines

The API was developed in parallel with the SWI-Prolog engines API and after some discussion with Jan Wielemaker. The APIs share a similar interface. For some examples, see https://github.com/LogtalkDotOrg/logtalk3/tree/master/examples/engines

The tbbt example could probably be converted into a fun demo for Web Prolog using two nodes, one for each player.


Thanks, both of you! One has to eat too, and that’s what I’m going to do now. :slight_smile: I’ll get back to you tomorrow, or perhaps not until Monday. Have a nice evening!

I’m back! Couldn’t resist…

Let’s see if I can deliver a decent response to each of you before it’s time to hit the sack. :slight_smile:

Jan B, I thought I knew enough about setup_call_cleanup/3 to use it once in a while when it’s appropriate (when opening files, for example), but who knows, I may be missing something. I’m sure it would deserve a place in Web Prolog, but I still don’t get what kind of role it can play in solving the problems we’ve discussed above. I understand that call_with_time_limit/2 might be useful (instead of the solution based on receive/1-2 and timeouts), but you keep talking about setup_call_cleanup/3 all the time, and that makes me confused. I’m happy to make another attempt to understand. :slight_smile:

Paulo, the API looks great! So it does indeed seem like Logtalk might be a suitable implementation language for a Web Prolog node. :wink: And yes, game-based demos are effective, so I’d love to find out if the tbbt example can be rewritten in Web Prolog.

(Now off to say hej to our new Swedish friend Håkan before going to bed…)

(Strange, BTW, I just noticed that there’s a limit on how many replies one may write to the same topic in a row. I didn’t know that until now, and that’s why you get my responses to you both in the same post.)

4 posts were merged into an existing topic: Merits of Erlang style for real-time systems

I agree about the need for nice names, but they’re hard to come up with.

But they are equal, aren’t they, as they both fail (given that there are three solutions)?