Does anyone care about protobufs?

Yes, it would. That is a case of non-Byzantine failure: late or lost (i.e. infinitely late) messages. For a real application you do have to guard against that.

However, this isn’t the internet. We know for sure that the service exists because tipc_service_exists/1 told us so. TIPC has message queuing and QoS, so the probability of this happening is near-zero.

On the server side, the most likely issue is an exception being thrown at you while blocked for a long period on tipc_receive/4.

For the client, you never reuse the socket instance. TIPC sockets are cheap to create, and TIPC port-ids are globally unique over a very long period of time.

The idea behind the operator ‘~>’ comes from the leads-to operator of Owicki and Lamport, 1982.

See: eventually_implies.pl (871 Bytes)

%  Described practically:
%
%    P ~> Q declares that if P is true, then Q must be true, now or at
%    some point in the future.

CAUTION: It isn’t without its hazards, though. It uses setup_call_cleanup/3, whose setup step runs atomically. If P performs some operation that could block the process, you could have some real trouble on your hands.

Predicate Q will be triggered in one of three circumstances:

  1. backtracking on failure into it
  2. receipt of an uncaught exception, or
  3. cutting of choice-points

Short of SWIPL crashing, it’s practically impossible to leak the live socket. Leaking it would be disastrous.

These ideas were all being worked out at the same time. The TIPC socket adaptation layer was written first; (~>)/2 came later, after I discovered the Owicki and Lamport paper.

SWIPL also has call_cleanup/2. The atomicity of setup_call_cleanup/3 guarantees that there can be no “window of vulnerability” between P and Q, where someone could throw an exception after the setup P has run but before the cleanup Q is registered, causing the fixed resource to be leaked without our knowing it.

Yes, I must guarantee that it gets closed under all normal and off-normal circumstances, even after the predicate that set it up moves on and calls other predicates, such as process_request/1.

Technically, ‘~>’ is non-deterministic. If P succeeds, then ‘~>’ succeeds once, but it always fails on backtracking, thus triggering the cleanup.
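
In fact, the whole of eventually_implies.pl boils down to a few lines. Here is a minimal sketch; the module wrapper and the operator priority of 950 are my reconstruction, so see the attached file for the definitive version:

:- module(eventually_implies, [(~>)/2, op(950, xfy, ~>)]).

:- meta_predicate ~>(0, 0).

%  P ~> Q: run setup P, then succeed exactly once. The (true ; fail)
%  continuation leaves a choice-point behind, so the cleanup Q runs on
%  backtracking into it, on an uncaught exception, or when the
%  choice-point is cut -- the three circumstances listed above.
~>(P, Q) :-
    setup_call_cleanup(P, (true ; fail), Q).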

If the server socket gets leaked, it’s still bound to a service for as long as the SWIPL instance is alive. TIPC is free to send traffic to the dead socket. Then you’re in trouble.

But how? Consider the following:

:- use_module(library(time)).

%
%  tipc_rendezvous/4 is nondet.
%
%  Message Pattern is Partially Synchronous.
%
%  Send an inquiry to one or more servers, then balk for a time and wait
%  for replies to arrive. Each reply that   is collected is unified with
%  DownMsg and the predicate succeeds. On  expiration of an asynchronous
%  alarm, we send a "$timeout$"  message   to  ourself.  This allows any
%  messages that are already in  the   queue  to remain undisturbed. The
%  timeout is intercepted internally. If/when encountered, the predicate
%  cuts choice-points and fails.
%
%  If the caller cuts choice-points before   the alarm expires, then the
%  alarm is cancelled and removed, as is the socket instance itself. Any
%  unprocessed messages in  the  socket's   message  queue  are silently
%  discarded. This allows the caller to terminate the transaction early,
%  if desired.
%
%  In the world of TIPC, it's *not* the sender's fault that there are no
%  listeners.  A  well-formed  tipc_send/4  always   succeeds.  For  the
%  listener, Temporal Liveness and Safety   properties  are satisfied in
%  that there is at least one "sender" that knows that you're blocked on
%  the tipc_receive/4. Eventually, you  are   guaranteed  to  receive at
%  least one message -- from yourself.

rendezvous_timeout(S) :-
    tipc_get_name(S, SID),              % get our own port-id
    tipc_setopt(S, importance(high)),   % in case we're being spammed
    tipc_send(S, "$timeout$", SID, []). % send the timeout to ourself

tipc_rendezvous(Addr, UpMsg, DownMsg, From) :-
    tipc_socket(S, dgram)
       ~> tipc_close_socket(S),

     alarm(0.250, rendezvous_timeout(S), AID, [remove(false)])
       ~> remove_alarm(AID),

     tipc_send(S, UpMsg, Addr, []),

     repeat,

       tipc_receive(S, Tmp, From, [as(codes)]),

       %  Compare via atom_codes/2 so the test works regardless of the
       %  double_quotes flag (Tmp arrives as a code list).
       (   atom_codes('$timeout$', Tmp)
       ->  !, fail
       ;   DownMsg = Tmp
       ).
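
A usage sketch (the 18888/10 service name and the "ping" payload are invented for illustration): findall/3 backtracks into tipc_rendezvous/4, harvesting one reply per solution, until the timeout cuts and fails, which also closes the socket and removes the alarm.

%  Collect every reply (a code list, per as(codes)) that arrives
%  within the 250 ms window.
all_replies(Replies) :-
    findall(Reply,
            tipc_rendezvous(name(18888, 10, 0), "ping", Reply, _From),
            Replies).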

As I said previously, the probability of a lost message is near-zero in the normal (most probable) use-case, but it is admittedly non-zero. So let’s suppose that we are barraged with responses. This becomes a queueing problem. If the mean service time of my listener is such that it cannot keep up with the arrival rate of replies from the server(s), then my socket is eventually going to go into “congestion”, and QoS policy takes over.
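
To put a number on “cannot keep up” (a textbook single-server queueing observation, nothing TIPC-specific): with replies arriving at mean rate \lambda and the listener draining them at mean rate \mu,

    \rho = \frac{\lambda}{\mu} > 1
    \quad\Longrightarrow\quad
    \text{the receive queue grows without bound.}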

After the queue size reaches about 2,000 entries, TIPC is going to start discarding replies. The timeout is implemented by injecting the timeout message into the message queue, but if the socket is in congestion, then there is a considerable prospect that that message, too, might be discarded, and then we’d be stuck. So we use the QoS capability of TIPC to set the importance (priority) of the timeout message higher than the regular traffic. AFAIK, this message is not expedited in the sense that it is received out of order, but it will not be discarded. You will get the timeout, even while being spammed. The Safety property is satisfied.

It’s because they were both written by the same people: the Swedish telephone giant, L.M. Ericsson.

Error or deficiency, it is what it is. TIPC Sockets for SWIPL is an adaptation layer (address family) for Berkeley Sockets.

This is design intent: We set a hard deadline for replies and we enforce that deadline. Anything arriving before the deadline is timely. And too late is as good as never.

Perhaps “balking” is the wrong word. Booch calls it a “synchronous message with timeout” (OOD w/Apps, p. 127). Bruce Powel Douglass calls it a “timed waiting rendezvous” (Doing Hard Time, p. 542).

If the timeout were part of the receive, then we’d get a new timeout for each iteration. We can’t make Progress if replies keep trickling in, or if we’re being spammed. We’d never terminate.

The TIPC source is in your favorite Linux source distro. It’s a kernel module.

See: TIPC Protocol

Come to think of it, TIPC Sockets is fully implemented at the Linux API level. And I do have (non-Prolog) implementations elsewhere that use the poll/recv pattern to implement OS-based timeouts on connectionless sockets. Predicate wait_for_input/3, however, only seems to support connection-oriented file streams.

No, it does not. It does, however, provide a facade to TIPC’s Name Service at name(1,1,0), canonically expressed as {1,1}. It’s always present. This is a whole new subject unto itself. It provides surveillance over everything that’s happening on the cluster, even on nodes that are not your own!

Predicates:

  1. tipc_service_exists/1 is semidet; see the sketch after this list.
  2. tipc_service_exists/2 is semidet (with timeout).
  3. tipc_service_probe/1 is semidet.
  4. tipc_service_probe/2 is nondet.
  5. tipc_service_port_monitor/2 is det. It monitors a list of TIPC addresses and can call a specified goal on any change in server status. Again, compare with Erlang’s supervisor/worker pattern.
  6. tipc_service_port_monitor/3 is det (with timeout). The timeout can be ‘infinite’, or the monitor can fork itself into a detached thread.
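
A quick sketch of how the first of these might be used; the 18888/10 service name is invented and the guard pattern is only for illustration:

:- use_module(library(tipc/tipc)).

:- meta_predicate with_ping_service(0).

%  Balk unless the (invented) 18888/10 service is up somewhere on the
%  cluster before committing to a transaction.
with_ping_service(Goal) :-
    (   tipc_service_exists(name(18888, 10, 0))
    ->  call(Goal)
    ;   print_message(warning, format("ping service is down", []))
    ).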

It’s all documented.

Not at all. Transparent Inter-Process Communication (TIPC) is a LAN protocol. It is not IP-based and therefore is not routable over IP networks. That said, I believe that newer versions can provide bearer channels based on UDP, which is routable. There is no concept of an IP address. And just about every other assumption that you may carry over from TCP/IP is invalid.

That’s a good idea.

Hello Peter, I’d say that enhancing features that extend the use of some “standard” protocols and optimize them for Prolog is a nice project. For my part, I never really liked all those blabla things like JSON that add tons of text around what are basically fields; protobuf, by comparison, looks much more concise. As for the readability, maybe it could be solved simply with a module adding comments to field numbers, or an XPCE tool? The more I look at XPCE, the more I find it so powerful but harsh to start using, as it is lacking tons of examples to really understand all of its “tricks” :-p

My current design is to translate between a “native-ish” Prolog structure and protobufs – the Prolog structures would be similar to those from json_read_dict/3 and json_write_dict/3.
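
As an illustration of the intended shape (the message definition and field values here are hypothetical; the eventual representation may differ in detail):

%  Given a .proto definition such as
%
%      message Person { string name = 1; int32 id = 2; }
%
%  the dict-style Prolog term, by analogy with json_read_dict/3 and
%  json_write_dict/3, would be something like
%
%      _{name: "Alice", id: 42}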

(The code for reading (parsing) protobufs is mostly finished, but I’m still working on test cases – there are quite a few permutations that need to be validated. Once the reading code is finished, the writing code should be quite straightforward.)

If you want to use XPCE (or Tau-Prolog, for that matter) as a front-end, you would have a choice of JSON or protobufs, depending on what format the backends expect (e.g., Google Cloud and maybe other services prefer protobufs). Protobufs provide more structure than JSON, and slightly richer data types; they’re also more compact, although that’s probably not a major issue (and protobufs aren’t human-readable, although I don’t think that’s significant when debugging).

Frankly, after looking at the protobuf documentation, I find it much more compact and efficient than JSON blabla. As long as it is structured, what matters is knowing the fields and having a reliable system. Good luck with your programming :slight_smile:

A video worth watching; at 10:20 it includes a mention of protobuf :slight_smile:

The .proto language allows specifying a “service” (presumably using gRPC) - it’s pretty straightforward: just a service name with input and output message types. It would probably be a good thing to generate code that uses SWI-Prolog’s HTTP client for this, but I don’t have any infrastructure for testing it. Also, I should first get the output protobuf code working (derived from the .proto specification) - and I don’t even have input working yet. :wink:
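
Not generated code, but a sketch of the general shape such code might take, assuming protobuf_message/2 from library(protobufs) for the wire encoding and library(http/http_client) for transport. The URL, templates, and content type are all made up, and real gRPC would additionally need HTTP/2 framing:

:- use_module(library(protobufs)).
:- use_module(library(http/http_client)).

%  Hypothetical: invoke one method of a .proto "service" over HTTP by
%  posting the wire-encoded request and decoding the wire-encoded reply.
call_rpc(URL, ReqTemplate, RespTemplate) :-
    protobuf_message(ReqTemplate, RequestWire),
    http_post(URL, bytes('application/x-protobuf', RequestWire),
              ReplyWire, [to(codes)]),
    protobuf_message(RespTemplate, ReplyWire).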

Hi Peter, once your protobuf module is usable, I’ll try to look back at something I did in 2018: parsing UNdata (for example: UNData FR) … Till now I get the DOM, parse some data inside, and just print it on screen; the day I looked at it was because the SWI-Prolog XPATH page was making me lose hair while trying to understand it, until I did a real example … I guess protobuf would be an interesting “glue” to structure the data and use it to build a full local dataset from the extracted UN country-data pages. For now, I am looking at how to extract and structure financial data from euronext.com :slight_smile: These are typically sites where you have tons of data lost in the middle of nowhere … Both “work” the same way: for UN country data you start with the ISO-3166 country codes, and in finance with ISIN codes; then you can grab and structure the data.

But as far as I read from your link, it’s not done in SWI-Prolog …

Status update: I’ve written the code for the protoc plugin and also the code for parsing input streams using the “_pb.pl” file generated by protoc.
I’ve also added quite a few test cases (and fixed the bugs that they uncovered), but I still need to write more test cases. Hopefully, in a “few days” I can send a PR.
Next step: generate a protobuf stream from a data structure (using the “_pb.pl”) - should be fairly easy to write.
I still have quite a bit of code cleanup to do, but I think the code in its current state is “good enough”, once it passes all the test cases.
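
For the curious, the existing low-level interface already handles the generation direction; a sketch with made-up field numbers and values, assuming library(protobufs) is loaded (the new “_pb.pl”-driven code will presumably sit on top of something similar):

%  Serialize field 1 = atom add, fields 2 and 3 = integers, to a
%  code list suitable for a wire.
?- protobuf_message(protobuf([atom(1, add), integer(2, 666), integer(3, 123)]),
                    Wire).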

If anyone wants to contribute more test cases: GitHub - kamahen/contrib-protobufs at segment
The main test cases are in the “interop” directory, which also shows what the data structures look like.
The “packed” test case isn’t quite working and I’m still working on the “golden_message” test case.


If anyone wants to review my changes, they’re here:

I plan to integrate these changes into the next dev release of swipl.

I’ve added a swipl plugin for protoc. It should be available in the next release of SWI-Prolog. There are still some rough edges (mainly with .proto files that import other .proto files), but protoc --swipl_out=... should be usable now. Please report any problems!


I’m done with enhancing the protobufs code. Improvements to the documentation would be much appreciated. For suggestions or bugs:
