Does anyone care about protobufs?

EDIT: I was wrong to assume that dicts need to be fully instantiated to work properly. Dicts are actually perfectly capable of dealing with uninstantiated values (keys must be instantiated though).

Original reply:

Could you give a concrete example of some of the problems? I used to think that a pair is neither fundamentally nor superficially different from any other term of arity 2.

The issue is not the pair but the list, in cases of incomplete instantiation. Suppose you want to assert that the field second in Msg is equal to "abcd", and then some predicate afterwards fails. You’d expect the whole thing to fail, but instead you get nontermination.

?- Msg = 'SomeMessage'(Pairs), member(second-"abcd", Pairs), 1=0.
% Nontermination

Of course you can guard against that by restricting the length of the pairs list. But then you run into the problem of duplicates. Suppose some logic in your code asserts that the field second is equal to "abcd" and then somewhere else that the field second is equal to "efgh". The predicate should logically fail but instead it introduces duplicates.

?- Msg = 'SomeMessage'(Pairs), length(Pairs, 5), member(second-"abcd", Pairs), member(second-"efgh", Pairs).
   Msg = 'SomeMessage'([second-"abcd",second-"efgh",_A,_B,_C]), Pairs = [second-"abcd",second-"efgh",_A,_B,_C]
;  ... % More permutations

On the other hand, using custom compound terms avoids both of these issues cleanly and efficiently. Custom compound terms can be used even without being completely instantiated, unlike dicts or association lists. Using custom compound terms becomes possible and desirable when the schema is precisely defined and strictly enforced (which doesn’t seem to be the case for Protobufs, sadly).
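
To see the contrast, here is a minimal sketch (assuming a fixed two-field schema; 'SomeMessage'/2 with one argument position per field is invented for illustration). A later failure fails finitely, and conflicting field values fail outright:

?- Msg = 'SomeMessage'(_First, Second), Second = "abcd", 1=0.
false.

?- Msg = 'SomeMessage'(_First, Second), Second = "abcd", Second = "efgh".
false.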

How come? What is the difference between these three (four) given that the set of properties is fixed to an x and a y?

  • point(X,Y)
  • point{x:X, y:Y}
  • [x(X), y(Y)] (or [x-X, y-Y])

The first is surely the most space efficient. The drawback is that you need a predicate point_x/2, etc., and it isn’t that easy to say x is absent, null, etc. Logical variables come to mind, but they are different from the null notion, as has been discussed extensively here before.

Dicts take twice the amount of space. They do not need predicates though, can omit fields, and provide an access time that is not measurably different as long as the number of keys is not really large. Dicts are an obvious match to JSON, YAML and similar dynamic languages. They are less needed for fully typed data, but still a pretty neat alternative.

For the point example I’d be tempted to say point(X,Y) is what we want. As the number of fields grows and the order of the fields becomes less obvious, the tables turn quickly.

An in-between notation is used in ECLiPSe. They use the SWI-Prolog dict notation, but each dict is mapped at read-time to a compound. So you define point as a struct with an x and a y, and then point{} is point(_,_), point{x:1} is point(1,_), etc. Neat for situations with a fixed set of attributes!
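
To make the trade-offs concrete, here is a minimal sketch of field access in each representation (pairs_x/2 is an invented helper name):

% Compound term: one dedicated accessor per field.
point_x(point(X, _), X).

% Dict: no accessor predicates needed; use get_dict(x, Point, X) or Point.x.

% Pair list: generic lookup; an absent field simply fails.
pairs_x(Pairs, X) :- memberchk(x-X, Pairs).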

It took a while for dicts to be picked up, but I see them more and more in source code written for SWI-Prolog :slight_smile:

Isn’t library(record) somewhat similar in spirit?

Not really. The record library automates generating the point_x/2 and related predicates from a single description. The ECLiPSe approach creates a read macro (something that SWI-Prolog doesn’t have) that rewrites Tag{key:Value, …}.

The library(record) approach can almost be done in ISO Prolog (except that there is no term expansion, so you need intermediate files, similar to Logtalk). The ECLiPSe approach requires extending the parser, and the SWI-Prolog approach comes with some extensions to the core, though not many, as dicts are (still) just compounds to most of the system.
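
For reference, a minimal sketch of the library(record) approach (the field types and defaults are chosen arbitrarily):

:- use_module(library(record)).
:- record point(x:integer=0, y:integer=0).

% The directive generates point_x/2, point_y/2, set_x_of_point/3,
% make_point/2 and friends, e.g.:
%
% ?- make_point([x(1), y(2)], P), point_x(P, X).
% P = point(1, 2),
% X = 1.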

I see. From my point of view, dicts are really great and one should give them a good try before discarding them as an option.

As it turns out, I sit on a throne of lies, and dicts are actually really well-designed. I didn’t know this would work:

?- X = Y{a:A,b:b}, Z = X.a, X = c{a:a,b:B}.
X = c{a:a, b:b},
Y = c,
A = Z, Z = a,
B = b.

I will add corrections to some of my above posts to avoid misleading anyone else.

EDIT: I was also wrong about pairs. Using pairs is actually superior to the custom compound terms I was promoting, and issues with key duplication can be resolved by using select/3. Pairs are superior because they allow you to reason about one key-value pair at a time, whereas custom compound terms force you to reason about the whole term at once.

As an example, consider a message format that encodes someone’s first and last initials. The first initial is prefixed with the letter F, the last initial with the letter L, and they can appear in either order. So A. P. can be encoded as either FALP or LPFA. The following DCG can both parse and generate this format to/from Prolog pairs of the form [first-'<first initial>', last-'<last initial>'] (note that the order is fixed inside Prolog):

% Parse or generate two prefixed initials in either order; the pair
% list fixes the order on the Prolog side.
initials(Initials) -->
        { Initials = [first-_First, last-_Last] },
        initial(Initials, Initials2),
        initial(Initials2, []).

% Each clause consumes one initial and removes the matching pair via
% select/3, so each field is matched exactly once.
initial(Initials, InitialsNoFirst) -->
        { select(first-First, Initials, InitialsNoFirst) },
        "F", [First].
initial(Initials, InitialsNoLast) -->
        { select(last-Last, Initials, InitialsNoLast) },
        "L", [Last].

This works correctly in the general case:

?- phrase(initials(Initials), Characters).
   Characters = ['F',_A,'L',_B], Initials = [first-_A,last-_B]
;  Characters = ['L',_A,'F',_B], Initials = [first-_B,last-_A]
;  false.

And in either direction:

?- phrase(initials(Initials), "LPFA").
   Initials = [first-'A',last-'P']
;  false.
?- phrase(initials(Initials), "FALP").
   Initials = [first-'A',last-'P']
;  false.
?- phrase(initials([first-'A',last-'P']), Characters).
   Characters = "FALP"
;  Characters = "LPFA"
;  false.

It’s not a problem at all. Consider program P1 sending messages of type M to P2. A message M contains fields a, b, c, d, e. Now, P1 adds a new field f. Here are the possibilities:

  1. Program P2 wants to use the new field f, modifies the code accordingly, and rebuilds. Problem solved.
  2. Program P2’s code isn’t changed but the executable is rebuilt. P2 starts receiving the new field f but doesn’t use it at all.
  3. Program P2 doesn’t change anything (doesn’t rebuild the executable). P2 starts receiving the new field f but the protobuf deserializer ignores field f (and, of course, the code doesn’t care because it didn’t use field f anyway). The net result is the same as case #2.

The inverse case, where P2 adds support for a new field f but program P1 doesn’t send it, has a similar case analysis: “required” fields turned out to be problematic, so they’re not even in proto3 syntax, where all fields are optional. If a field is optional, a program has to be prepared to handle its absence; and it doesn’t matter whether the field is missing because an older version of the protobuf was used or because the sender didn’t fill in that particular data.
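
In code, “prepared to handle its absence” might look like this minimal sketch, assuming messages are represented as pair lists as in the earlier posts (msg_field_default/4 is an invented name):

% Value is the field's value if present, Default otherwise.
msg_field_default(Msg, Field, Default, Value) :-
    (   memberchk(Field-Value, Msg)
    ->  true
    ;   Value = Default
    ).

% ?- msg_field_default([a-1, b-2], f, none, V).
% V = none.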

And if a field is removed from message M, that’s not a problem either – if P2 didn’t use the field, it doesn’t matter if the deserializer has a provision for the field or not; and if P2 did use the field, it had to be prepared to handle the absence of the field. (And, for languages like C++, if the executable is rebuilt, and the field is referenced, there’ll be a compile-time error; for languages like Python, this needs to be caught in unit tests or by using other tools.)

All I can tell you is that protobufs have been used by tens of thousands of very good programmers at Google over 15+ years and nobody seems to care about reasoning over protobufs, nor about bidirectional transformations. (As for the joke I mentioned: it’s about transforming protobufs in one format to protobufs in another format – for example, protobufs that log advertising “hits” creating protobufs that go into the billing system. In many ways, it’s a lot like old-fashioned business programming in COBOL or PL/I).

(There are some caveats about how Google doesn’t have a problem with protobuf versioning: almost everything is built from a unified codebase with extensive unit tests, so that if a change is pushed, that triggers unit tests for all potentially affected programs. Executables are statically linked, so there’s no problem with old versions of shared objects creating ABI inconsistencies. Also, programmers are careful to only add fields to protobufs (enforced by the standard code review process) – if a field is removed, it typically goes through a cycle where it’s marked as “deprecated”, code search tools are used to find all uses of the field, and only then is the field removed.)

I don’t think that would work with protobufs because messages can contain messages, so there are an infinite number of possibilities (e.g., the “tree” example). Instead, I’m proposing just interpreting the dict, similar to how JSON objects are currently represented as a dict. (For output, the dict tags are optional because they can be figured out from the .proto definition; for input, they’re essentially a comment that might help the programmer.)
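
A hypothetical sketch of what that could look like for the recursive case (the message and field names are invented; this is not the library’s current API):

% Nested messages map naturally onto nested dicts; the dict tag names the
% message type on output and is essentially a comment on input.
?- Msg = 'Tree'{value: 1,
                left:  'Tree'{value: 2, left: nil, right: nil},
                right: nil}.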

To paraphrase Brad DeLong (talking about Paul Krugman):

  1. Remember that @Jan’s design is correct.
  2. If your analysis leads you to conclude that @Jan is wrong, refer to rule #1.

(There are a few other people whose names can be substituted into this syllogism. I won’t list them, for fear of omitting one or two.)

:sunglasses:

Just quoting this so that those who are not familiar with protobufs will pay attention to it. That one bit a great many people.

I’m curious about what the problem was? That a field might be missing (optional)? Or that unknown fields are silently ignored? (Or something else?)

For me, at least, it was that I was partial to using required fields. Then later, for some case, I learned that they were not really required and so made them optional. Then the problems started, with data being rejected because the field was required on the receiving end and not being set on the sending end. It took some time to get things to work, and it was a bit of a problem during the transition phase.

The current way I think of protobufs, when needed, is as algebraic data types: there can be many valid patterns, and only one of them needs to match for the data to be accepted. If you like math theorems, the analogy also works: there are sets (think ∀) and you only need one subset to work (think ∃). Or if you like compilers, one could think of it as a simplified type inference that checks the input with inference rules, except that it is not checking types but something else.

So to put it another way: originally I thought of the data as more of an SQL query, with keys, foreign keys, and relationships, and tried to make the so-called key values required. After changing my way of thinking, it just went better. I don’t know if I will need to change my way of thinking again, but for now it seems to work.

I hope that answered your question. I was surprised by your question as I thought everyone ran into the same problem based on the change from required to optional.

The same thing happened many times at Google, which is why “required” was removed from proto3 and strongly discouraged for proto2.

Yes. They’re just dumb containers. There are some aids for controlling the contents (e.g., oneof), but they’re pretty minimal. My suspicion is that most people just use them as dumb containers and don’t think a lot about them as tools for data structuring – at least, that was my case (I had no idea how they were encoded on the wire, for example, until I started looking into protobufs.pl – and that was only because I’m helping someone who needs to know those details but is a self-taught JavaScript programmer and doesn’t have the “bare metal” experience that I have.)

My intent is to allow working with existing systems that require protobufs. I’m not advocating for or against them. (And there seem to be some “interesting” interoperability issues between proto2 and proto3, which I’m not sure how to deal with in Prolog.)

SWIPL’s library(protobufs) is a DCG. Each ‘protobuf’ term provides a list of terms that correspond to productions in the parser. Its design intent is to permit efficient, lossless serialization of hierarchically structured data to/from instances of Prolog via byte streams, while operating at arm’s length in cluster computing environments. In my view, the ‘protobuf’ term, like a regular expression, is not subject to interpretation. It is not an unstructured bag of data (a dumb container) to be interpreted willy-nilly. protobuf_message/2 must generate a wire-stream that is faithful to the Prolog data structure that was presented to it. The other side must be able to reconstitute that very same data structure, given the same recipe, including unification of variables.

These days, I write my grammars such that protobuf_message/2 is semi-deterministic. That is, either the wire-stream is valid and well-formed or it isn’t. To do anything else is simply unsafe. There can be no misinterpretation, if you want to sleep at night. That is not to say that you can’t have refinement, according to the principle of Information Hiding. A server may be dispensing a list of codes over the wire-stream that it knows nothing about, to a client that knows precisely what to do with them.
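
For instance, a minimal round trip with library(protobufs) loaded and a fully specified template (field numbers 1 and 2 chosen arbitrarily; integer/2 is the library’s zigzag-encoded integer type). Serializing from a ground template and then parsing the result back with the same template shape is deterministic:

?- protobuf_message(protobuf([atom(1, hello), integer(2, 42)]), Wire),
   protobuf_message(protobuf([atom(1, A), integer(2, N)]), Wire).
Wire = [10, 5, 104, 101, 108, 108, 111, 16, 84],
A = hello,
N = 42.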

But for those who just want to write DCG grammars, here’s my two-cents:

Find a grammar first, if you can. If not, write one. I write RFC-2234 ABNF grammars. They’re easy to write, easy to translate to Prolog, and they’re machine-checkable (for completeness, at least).

If you are from the Lex/Yacc school, or Flex/Bison, and so forth:

  1. start with the (lexical) terminals
  2. I always create a ‘root’ production
  3. all other productions are descendants of the ‘root’

Once you’ve done that and are satisfied that the grammar is complete and correct, copy the grammar to a Prolog source file and comment out the ABNF entirely. Given the ABNF, writing Prolog DCG terminals and productions is straightforward. In no time at all, you will have a parser that’s traceable to the original ABNF.
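
A minimal sketch of that workflow (the grammar itself is invented for illustration):

% ABNF (RFC 2234), kept alongside the translation:
%
%   version = 1*DIGIT "." 1*DIGIT
%   DIGIT   = %x30-39

version(Major, Minor) --> digits(Major), ".", digits(Minor).

digits([D|Ds]) --> digit(D), digits(Ds).
digits([D])    --> digit(D).

digit(D) --> [D], { code_type(D, digit) }.

% ?- phrase(version(Major, Minor), `3.1`).
% Major = [51], Minor = [49].   % "3" and "1" as code lists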

I didn’t mean that a protobuf is an unstructured bag of data. It is structured – and usually it’s possible to derive the structure of the “wire” format without even using the .proto file. However, there are exceptions; here’s an example, showing 2 different ways to interpret the structure of some wire-formatted data:

?- Codes = [82,9,105,110,112,117,116,84,121,112,101],
      protobuf_message(protobuf([embedded(10, protobuf([integer64(13, I)]))]), Codes),
      protobuf_message(protobuf([embedded(10, protobuf([double(13, D)]))]), Codes),
      protobuf_message(protobuf([string(10,S)]), Codes).
Codes = Codes, Codes = [82, 9, 105, 110, 112, 117, 116, 84, 121|...],
I = 7309475598860382318,
D = 4.272430685433854e+180,
S = "inputType".

To correctly interpret the wire format, it’s necessary to know both the structure of the data (as defined in the .proto file) and how to interpret the individual pieces of data (e.g., is the 64-bit value an integer or a float?).

The current protobuf_message/2 requires knowing the field numbers (or “tags”, as they are called in the Prolog documentation) and details about the encoding (e.g., is a number encoded as varint, varint-zigzag, 32-bits, 64-bits?). This information is available in the .proto file, so I’d prefer using the field name to derive the field number and the detailed type information automatically – that’s what I’m working on right now (writing a protoc plugin that will generate the necessary meta-data).

There’s a second problem with the current protobuf_message/2 – it doesn’t handle the situation where the order of fields in the wire format isn’t known. In practice, the fields are put in order by field number; but the protobuf spec says that they can be in any order.

There’s a minor problem in interpreting wire format: missing fields. But that can be handled with a slight hack: wrap each item in a repeated. In the following, tag 10 is present (and gets a single-element list) and the data with tag 11 is missing (and gets an empty list):

?- Codes = [82,9,105,110,112,117,116,84,121,112,101],
      protobuf_message(protobuf([repeated(10, string(S)),
                                 repeated(11, integer64(I))]), Codes).
Codes = [82, 9, 105, 110, 112, 117, 116, 84, 121|...],
S = ["inputType"],
I = [].

However, if the template specified the fields in a different order from how they appear in the wire format, this wouldn’t work. I intend to improve this, so that the fields can be given in any order, and also to eliminate the need to wrap everything in “repeated”, which is a bit tedious.

Please see: peter_test1.pl (1.7 KB)

First, let’s examine the wire-stream example that you’ve provided:

wirestream(Codes) :-
    Codes = [82,9,105,110,112,117,116,84,121,112,101].

%  ?- wirestream(Codes), disect_wire_type(W1, Codes),
%     testit(codes(X)), disect_wire_type(W2, X).
%
% Codes = [82, 9, 105, 110, 112, 117, 116, 84, 121, 112, 101],
%  W1 = wire_type(10, length_delimited, [9, 105, 110, 112, 117, 116, 84, 121, 112, 101]),
% X = [105, 110, 112, 117, 116, 84, 121, 112, 101],
% W2 = wire_type(13, fixed64, [110, 112, 117, 116, 84, 121, 112, 101]).

The only thing that we know for certain is that at the top level (W1), we have a length-delimited list of codes with a tag value of 10, and a length of 9 octets, with the 9-octet payload following. If we speculate and look inside, we see something else (W2) that may or may not be right.
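
To spell out the arithmetic behind W1: a protobuf field key is (field_number << 3) \/ wire_type, so:

?- Key = 82, FieldNum is Key >> 3, WireType is Key /\ 0b111.
Key = 82,
FieldNum = 10,
WireType = 2.

Wire type 2 means length-delimited, and the following octet, 9, is the payload length, which is exactly what disect_wire_type/2 reported above.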

Let’s refactor your example to make it more “Prologish”:

numeric_alternative(integer64(X), Proto) :-
    Proto   = protobuf([ integer64(13, X)]).

numeric_alternative(integer(X), Proto) :-
    Proto = protobuf([integer(13, X)]).

numeric_alternative(double(X), Proto) :-
    Proto   = protobuf([ double(13, X)]).

alternative(X, Proto) :-
    numeric_alternative(X, Payload),
    Proto   = protobuf([embedded(10, Payload)]).

alternative(string(X), Proto) :-
    Proto   = protobuf([string(10, X)]).

alternative(codes(X), Proto) :-
    Proto   = protobuf([codes(10, X)]).

alternative(atom(X), Proto) :-
    Proto   = protobuf([atom(10, X)]).

testit(X) :-
    wirestream(Codes),

    alternative(X, Proto),

    protobuf_message(Proto, Codes).

When we run this program, we see that testit/1 is non-deterministic. As you have pointed out, the wire-stream presented is syntactically correct but semantically ambiguous.

1 ?- testit(X).
X = integer64(7309475598860382318) ;
X = double(4.272430685433854e+180) ;
X = string("inputType") ;
X = codes([105, 110, 112, 117, 116, 84, 121, 112|...]) ;
X = atom(inputType).

A friend once said, “There are very few rights in a vast sea of wrongs.”

The fact that this is possible at all is mere coincidence. After all, who could’ve imagined that this string, “inputType”, could possibly be misinterpreted as an embedded message with a ‘fixed64’ payload and tag 13? That’s a dangerous defect!

We must eliminate as much conditional entropy as possible, as soon as possible. We do this by being more specific, that is by providing a hint that testit/1 can use to narrow the search and resolve the ambiguity in the Prolog programmer’s favor. The Prolog programmer is expecting a string:

2 ?- testit(string(X)).
X = "inputType".

3 ?- testit(integer(X)).
false.

testit/1 is now semi-deterministic. That is, there is either zero or exactly one possible alternative.

As I said privately, I find this to be a mortal defect in Google’s Protobufs specification. If there is no implied ordering of wire-types in the wire-stream, then there can be no implied ordering of fields in the ‘.proto’ file.

Given this snippet of code:

test_permutation([], []) :- !.

test_permutation([P1 | More], Codes) :-
    alternative(P1, Proto),

    protobuf_message(Proto, Codes, Residue), % we're using the difference-list version

    test_permutation(More, Residue).


testit_permuted(Y) :-
    wirestream(Codes),

    append(Codes, Codes, Codes2),
    append(Codes, Codes2, Codes3), !,

    writeln(Codes3),

    permutation([double(_), integer64(_), string(_)], Y),

    test_permutation(Y, Codes3).

‘testit_permuted/1’ is also nondeterministic:

4 ?- testit_permuted(Y).
[82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101]
Y = [double(4.272430685433854e+180), integer64(7309475598860382318), string("inputType")] ;
Y = [double(4.272430685433854e+180), string("inputType"), integer64(7309475598860382318)] ;
Y = [integer64(7309475598860382318), double(4.272430685433854e+180), string("inputType")] ;
Y = [integer64(7309475598860382318), string("inputType"), double(4.272430685433854e+180)] ;
Y = [string("inputType"), double(4.272430685433854e+180), integer64(7309475598860382318)] ;
Y = [string("inputType"), integer64(7309475598860382318), double(4.272430685433854e+180)] ;
false.

I could select one in order to resolve the ambiguity:

22 ?- testit_permuted(Y).
[82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101]
Y = [double(4.272430685433854e+180), integer64(7309475598860382318), string("inputType")] ;
Y = [double(4.272430685433854e+180), string("inputType"), integer64(7309475598860382318)] ;
Y = [integer64(7309475598860382318), double(4.272430685433854e+180), string("inputType")] ;
Y = [integer64(7309475598860382318), string("inputType"), double(4.272430685433854e+180)] .

23 ?- testit_permuted($Y), !, writeq($Y), nl.
[82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101,82,9,105,110,112,117,116,84,121,112,101]
[integer64(7309475598860382318),string("inputType"),double(4.272430685433854e+180)]
true.

‘testit_permuted/1’ is now semi-deterministic. It will only unify with the selected permutation, $Y.

Regards.

I’m working on doing something similar by incorporating the information from the .proto file.
The protobuf_segment_message/2 predicate that I’ve exposed is one step on the way to an easier-to-use protobufs.pl (perhaps I shouldn’t have exposed it – it’s a bit experimental and I might change its design).

I agree that the lack of specification for field order is unfortunate. The documentation gives a reason (merging protobuf wire formats) but it seems a bit weak.

But I don’t get to define the specification; if someone says “your implementation of protobufs can’t read a conforming implementation’s output” I can’t say “but it’s a mortal defect!”; they’ll just use something else. My intended use case is connecting to an existing app (e.g., somewhere in the “Cloud”) that uses a protocol like gRPC that requires protobufs. That app could be implemented in C++, Python, Java, JavaScript or any of the dozen other languages for which there is protobuf support; and I have no way of enforcing things like field order. All I can do is make something that works with whatever already exists and (hopefully) is easy to use.

No, you don’t. But the specification, such as it is, says that “serialization order is an implementation detail.” So wire-streams generated by protobuf_message/2 are conforming. And I’ve illustrated a means to parse a foreign wire-stream in any permutation of order, using mechanisms that are customary to Prolog.

BUT, my primary objective is to preserve order, structure, accuracy, and precision. There’s no way to preserve order and structure (e.g., rank and hierarchy) if a conforming generator can ignore them with impunity.

Perhaps they should. A conforming application can do just about anything it wants – it’s not a grammar, it’s just a bag of data.

Just because it’s official, doesn’t make it right. Cyber-warfare is upon us. So, have a care.

I do not wish to leave the discussion group with the notion that the foregoing is the preferred use-case for library(protobufs). It is not. What we’ve been discussing pertains to means and methods for inter-working with foreign wire-streams that have been generated by other systems and languages that are utilizing Google’s tools.

For those of you who are writing in Prolog for Prolog, you can expect much more from library(protobufs). But much more is expected from you. Think of library(protobufs) as a toolkit, and not as a turnkey service.

The following screenshot depicts an HMI for a very-large distributed application. This system has National Language Support for U.S. English (usa), French (fra), German (ger), and Danish (dan):

The National Language chosen for the HMI is selected by way of an environment variable that’s provided in the user’s log-in profile. Several National Languages may be supported on the same machine simultaneously.
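
That selection might look like this minimal sketch (the variable name NLS_LANG and the default are invented for illustration):

% Pick the HMI language from the login environment, defaulting to usa.
hmi_language(Lang) :-
    (   getenv('NLS_LANG', Value)
    ->  downcase_atom(Value, Lang)
    ;   Lang = usa
    ).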

The HMI application is a SWIPL PCE application, for those who don’t recognize it. The HMI has no built-in knowledge of the structure of the menu tree that it will present, or of what the fkeys will do. The HMI application only knows how to render the UI widgets, the splash photo, and the fkeys, and how to ask for a menu tree.

The HMI needs two things: ‘fkeys’ for the function keys, and ‘fmenus’ for the menu item hierarchy.

For this illustration, we’re only going to be looking at the ‘fkeys’:

In the Prolog structure illustrated below, a list of ‘fkey’ structures is provided. Each ‘fkey’ structure provides a ‘moniker’ and a list of ‘fno’ structures. Each ‘fno’ structure provides a UI widget association, ‘f1’ through ‘f10’, some fly-over help text for that widget, and a list of actions that should be performed when that widget is pressed/selected, etc.

main_menu(usa, Menu) :-
    Menu = [
        [ fkey(fkeyversion,
               [ fno(f1, '$s2r31: 3.1.16$', action([func(['MmiFkeyHelp'])]))
               ]),

          fkey(fkey1,
               [ fno(f1, 'Help',         action([func(['MmiFkeyHelp'])])),
                 fno(f3, 'Redraw',       action([func(['MmiFkeyRedraw'])])),
                 fno(f4, 'Print screen', action([func(['MmiFkeyPrnscr'])]))
               ]),

          fkey(fkey2,
               [ fno(f1, 'Help',         action([func(['MmiFkeyHelp'])])),
                 fno(f2, 'Main menu',    action([func(['NmGoMain'])])),
                 fno(f3, 'Redraw',       action([func(['MmiFkeyRedraw'])])),
                 fno(f4, 'Print screen', action([func(['MmiFkeyPrnscr'])])),
                 fno(es, 'Return',       action([exit]))
               ])
        ],
        [
            % FMenu parts (removed for simplicity)
        ]
    ].

An equivalent ‘.proto’ file looks like this:

syntax = "proto2";

message MMIActionClause {
    repeated string func = 130;
    optional bool   exit = 131;
}

message MMIFnoClause {
    required string moniker         = 121;
    required string help            = 122;
    repeated MMIActionClause action = 123;
}

message MMIFKeyClause {
    required string moniker   = 110;
    repeated MMIFnoClause fno = 120;
}

enum Language {
    USA = 1;
    FRA = 2;
    GER = 3;
    DAN = 4;
}

message MMIMessage {
    repeated MMIFKeyClause fkey = 100;
//  repeated MMIMenuClause fmenu = 200;
}

message MMIRequest {
    required Language nls     = 1;
    required MMIMessage fkeys = 2;
}

NOTE: As originally designed, SWIPL’s library(protobufs) stands alone. It does not require any tools, libraries, or source code artifacts provided by others in order to be used effectively on SWIPL. (I will admit, however, that constructing equivalent ‘.proto’ files for your grammars and using ‘protoc’ as a validation tool turns out to be really handy.)

Getting back to the illustration:

The menu trees are dispensed by a server process running out on the cluster somewhere. For each member of a ‘language’ enumeration, the server replies with the associated protobuf wire-stream that has been pre-built when the server is launched. The interaction between the HMI and the server is by way of a synchronous transaction over an IPC channel (connectionless socket, TIPC sockets in this case).

The wire-stream transaction between HMI client and server looks like this:

client_sent([8,1]), to(name(20007,10,0))

… about 900 micro-seconds later…

client_rcvd([8,1,18,232,2,162,6,57,242,6,11,102,107,101,121,118,101,114,115,105,111| ...]),
     length(365), from(port_id('<1.1.2:2252316681>'))

Now, all you have to do is reconstitute the ‘main_menu’ structure that we saw earlier. “But wait!”, you’d say, “The structure doesn’t resemble anything that library(protobufs) knows anything about!” And you’d be right. We have to provide some extensions that protobuf_message/2 can use to serialize your own structures.

Somewhere on both sides we’d have to provide the following hooks:

%
%  Hooks into protobufs
%

protobufs:message_sequence(Type, Tag, Value) -->
    { my_message_sequence(Type, Value, Proto) },
    protobufs:message_sequence(embedded, Tag, Proto).

%
%  fkey, fno, and action clause message sequence predicates
%
%  See: mmi.proto
%

my_message_sequence(fkey, fkey(Moniker, FNoList), Proto) :-
    Proto = protobuf([ atom(110, Moniker),
                       repeated(120, fno(FNoList))
                     ]).

my_message_sequence(fno, fno(Moniker, Help, action(Actions)), Proto) :-
    Proto = protobuf([ atom(121, Moniker),
                       atom(122, Help),
                       repeated(123, action(Actions))
                     ]).

my_message_sequence(action, func(FuncList), Proto) :-
    Proto = protobuf([ repeated(130, atom(FuncList)) ]).

my_message_sequence(action, exit, Proto) :-
    Proto = protobuf([ boolean(131, true) ]).

And we’d have to register the enumeration:

%
%  Register an enumeration 'language' with protobufs
%

protobufs:language(Key, Value) :-
    nth1(Value, [ usa, fra, ger, dan], Key).

NOTE: We are defining these things in the ‘protobufs’ module’s namespace, not our own. library(protobufs) can’t see them otherwise.

Now, protobuf_message/2 can treat your own structures as first-class entities that can be ‘repeated’ and so forth. When protobuf_message/2 is presented with the wire-stream returned by the server, it yields a list that contains the ‘fkey’ structures, exactly as we expect it to appear:

[ fkey(
    fkeyversion,
    [ fno(
	f1,
	'$s2r31: 3.1.16$',
	action(
	  [ func(
	      [ 'MmiFkeyHelp'
	      ])
	  ]))
    ]),
  fkey(
    fkey1,
    [ fno(
	f1,
	'Help',
	action(
	  [ func(
	      [ 'MmiFkeyHelp'
	      ])
	  ])),
      fno(
	f3,
	'Redraw',
	action(
	  [ func(
	      [ 'MmiFkeyRedraw'
	      ])
	  ])),
      fno(
	f4,
	'Print screen',
	action(
	  [ func(
	      [ 'MmiFkeyPrnscr'
	      ])
	  ]))
    ]),
  fkey(
    fkey2,
    [ fno(
	f1,
	'Help',
	action(
	  [ func(
	      [ 'MmiFkeyHelp'
	      ])
	  ])),
      fno(
	f2,
	'Main menu',
	action(
	  [ func(
	      [ 'NmGoMain'
	      ])
	  ])),
      fno(
	f3,
	'Redraw',
	action(
	  [ func(
	      [ 'MmiFkeyRedraw'
	      ])
	  ])),
      fno(
	f4,
	'Print screen',
	action(
	  [ func(
	      [ 'MmiFkeyPrnscr'
	      ])
	  ])),
      fno(
	es,
	'Return',
	action(
	  [ exit
	  ]))
    ])
]

If you’ve done your homework, then it’s safe to assume that if protobuf_message/2 succeeded, the structure (Reply) that was returned is both valid and well-formed, for the request that you provided and for the grammar that you specified. The HMI client can use this structure directly to construct the appropriate UI elements.

Protobuf_message/2:

  1. is the “Guardian at the Gate”
  2. for constants, it will generate a wire-stream for encode
  3. for constants, it will unify with a wire-stream on decode

The client side might look like this:

%
% Here's the Client Side.
%
% tipc_client/1 is semidet.
%

tipc_client(NLS) :-
     must_be(constant, NLS),

     Request = protobuf([enum(1, language(NLS))]),

     Payload = protobuf([repeated(100, fkey(Fkeys))]),
     Reply   = protobuf([enum(1, language(NLS)),
                         embedded(2, Payload)]),


     protobuf_message(Request, UpMsg), % Request is valid/well-formed, if true

     tipc_server_address(Addr),

     tipc_service_exists(Addr),        % at least one server is available, if true

     format('client_sent(~w), to(~p)~n', [ UpMsg, Addr]),

     tipc_socket(S, dgram)
       ~> tipc_close_socket(S),

     tipc_send(S, UpMsg, Addr, []),

     % Rendezvous with the Server

     tipc_receive(S, DownMsg, From, [as(codes)]), !,

     length(DownMsg, Len),

     format('client_rcvd(~w), length(~d), from(~p)~n', [DownMsg, Len, From]),

     protobuf_message(Reply, DownMsg),  % Reply is valid/well-formed, if true

     print_term(Fkeys, [indent_arguments(2)]).

The worker loop on server side might look like this:

process_requests(S) :-

    Request = protobuf([enum(1, language(NLS))]),

    Reply   = protobuf([enum(1, language(NLS)),
                        codes(2, Codes)]),

    repeat,

      tipc_receive(S, UpMsg, Peer, [as(codes)]),

      format('server_rcvd(~w), from(~p)~n', [UpMsg, Peer]),

      protobuf_message(Request, UpMsg),

      once(fkey_msg(NLS, Codes)),

      protobuf_message(Reply, DownMsg),

      writeln(server_replies(DownMsg)),

      tipc_send(S, DownMsg, Peer, []),

    fail.

Finally, if our National Language selection is French, then we’d see the same HMI rendered in French.

Regards.