Dealing with state

Well, of course I know the edcg pack :). I believe we have already spoken about it regarding some issues I had with edcg, namely the need to define prototypes for every edcg predicate and the lack of push-back lists.

By the way, I built my own rbtrees state threading library usable inside a classic DCG here.
The idea is that you can interact with the rbtree using the predicate state//1. For example, state(o(key, Value)) will do the equivalent of rb_lookup(key, Value, Tree).
So the o(key, Value) compound does a lookup, +(key, NewValue) inserts or updates a value, and -(key, OldValue, NewValue) updates a value while retrieving the old one.
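To give the flavour, here is a minimal sketch of how such a state//1 could be written on top of library(rbtrees); the clause bodies are illustrative, not the actual library code:

:- use_module(library(rbtrees)).

%!  state(+Op)// is semidet.
%   Interpret Op against the rbtree threaded as the DCG state.
state(o(Key, Value), Tree, Tree) :-                 % lookup only
    rb_lookup(Key, Value, Tree).
state(+(Key, NewValue), Tree0, Tree) :-             % insert or update
    rb_insert(Tree0, Key, NewValue, Tree).
state(-(Key, OldValue, NewValue), Tree0, Tree) :-   % update, returning the old value
    rb_update(Tree0, Key, OldValue, NewValue, Tree).

Since the threaded state is not a list, such a grammar is best called with call_dcg/3 rather than phrase/3, e.g. rb_empty(T0), call_dcg((state(+(key, 1)), state(o(key, V))), T0, _).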

I use it extensively in my music notation grammar and I found it to work really well.

How would you insert a new line while maintaining the order of lines?

Every time state pops up I think we need something better :slight_smile: Using DCGs with push-back lists, optionally with some dictionary, is of course an option, but it doesn't look nice. A motivating example for delimited continuations was that they can deal with grammars without requiring DCG expansion.
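For reference, the push-back idiom mentioned above looks roughly like this (a minimal sketch, with a made-up nonterminal name), where the threaded "list" holds just the state dict:

inc_count, [S] -->
    [S0],
    { get_dict(count, S0, N0),
      N is N0 + 1,
      put_dict(count, S0, N, S) }.

?- phrase(inc_count, [_{count:0}], [S]).
S = _{count:1}.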

I still do not have a concrete idea that feels right though. IMO it touches on the problem of having variable scoping beyond clauses. Completely global scope is too much and comes with its own problems. Probably you want the scope to be all children of a given goal. We can actually hack that in current SWI-Prolog using something like

with_state(Goal, State) :-
    call(Goal),
    nolco(State).

% nolco/1 merely keeps State alive in the with_state/2 frame, preventing
% last-call optimisation so the frame can still be found by state/1 below.
nolco(_).

state(State) :-
    prolog_current_frame(Me),
    prolog_frame_attribute(Me, parent_goal(with_state(_, State))).

Of course, this is a little slow, but it gets closer to the desired semantics. Now, however, we need to advance the state. We can do so by using a dict for the state and using b_set_dict/3. This, however, changes the state as seen in the entire tree. Alternatively we can use with_state/2 recursively.
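For example, advancing the state with b_set_dict/3 on top of the code above could look like this (a hedged, untested sketch; count/0 is just an illustrative predicate):

count :-
    state(S),
    get_dict(n, S, N0),
    N is N0 + 1,
    b_set_dict(n, S, N).    % destructive (but backtrackable) update

?- with_state((count, count, state(S)), _{n:0}).
S = _{n:2}.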

I guess it should be possible to support this sort of stuff efficiently at the VM level. The main question is what it should look like at the Prolog level?

Well, I completely agree :slight_smile:

Well, maybe it's because I have been working with the DCG notation for a long time now for my music notation grammar, but I kind of started to appreciate it…

I don't think using a term expansion is a weakness. On the contrary, I think that term expansion is a very elegant solution to the problem. (I suppose I still hold this view because I have not looked at the code for the expansion itself ^^)
One thing I really like about the DCG notation, and which is not taken care of by your solution, is that DCG rules are visibly different from normal Prolog clauses, either by their neck or by the Name//Arity reference notation, as in sequence//2.
With your solution, there is no way to tell syntactically whether a clause uses some external state; you only find out at runtime.

In my opinion, THE missing thing from DCGs is the ability to thread multiple accumulators, which is the main feature of edcg, but I really don't like the use of prototypes.
Maybe we could extend the [Value] notation of DCGs by adding an index, very much like in edcg: [Value]:1 to use the first accumulator, or [Value]:2 to use the second one? Except that you would not need prototypes, since the indexes would be the same for everybody.
It would also be backward compatible with existing DCGs.

If you want to name accumulators, you could use a macro [Value]:#myacc, and we could overload phrase/3 to be usable inside a grammar to thread accumulators differently:

a -->
  [a]:1,
  [b]:2,
  phrase(b, 2), % use only the second accumulator
  phrase(c, 2, 1). % c's accumulator 1 is our accumulator 2, and its accumulator 2 is our 1

Mild case of Stockholm Syndrome? :slight_smile:

Yes. Whether that is good or bad is unclear. Using state is a property of the code that asks for some care. But so is using cuts, constraints, tabling, assert/retract and a bunch more. Why would using state be so much different? Translating full DCGs (no non-DCG code) is fairly straightforward. As we start to mix with non-DCG code we have to decide when to do the unification of the remaining tail. There is currently a fair consensus about that, but that has taken a while, introduces overhead and sometimes causes unexpected results. There has also been quite some discussion about which primitives need {} and which do not.

You have more experience with edcg, but do we not need to know how many accumulators exist? It seems edcg uses a list of accumulators. That probably gets rather slow quickly. Using a dict would allow for named accumulators.

ahah, probably ^^

Ah, I had not thought of that. I see how mixing with non-DCG code would be much more elegant.

Of course, you are right…
Here is a joke idea: use a longer neck for each additional accumulator :slight_smile:
So --> has 1 accumulator, ---> has 2, etc.
Or even worse: 2--> has two accumulators, 3--> has three, etc.


Are delimited continuations able to do what Monads do in Haskell?
(My "to do" list includes attempting to fully understand both of these; so far my best attempt at Monads is to think of them as a kind of run-time DCG expansion. But there's also this: A Neighborhood of Infinity: You Could Have Invented Monads! (And Maybe You Already Have.))

BTW, edcgs allow dispensing with {…} notation, although I don't like the solution because it seems to be error-prone (edcg marks the predicates that get expanded; everything else is used as-is, with an implicit {…}).

The venerable MTS operating system had a notion of line-numbered files. Fractional line numbers were allowed; if a line was inserted between lines 10 and 11, it would typically get a label like 10.3 (I suspect some kind of Fibonacci or logarithmic calculation to get the value; whatever it was, it worked pretty well in practice). People typically did this until they had too many fractional line numbers, at which point they would renumber the entire file.
So, that might work.
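In Prolog terms, a very crude sketch of that numbering idea (using a plain midpoint, not whatever MTS actually computed) could be:

insert_between(Before, After, New) :-
    Before < After,
    New is (Before + After) / 2.

?- insert_between(10, 11, N).
N = 10.5.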

No, AFAIK. Delimited continuations allow you to write coroutines by suspending a computation, allowing some other code to proceed and resume the suspended computation with additional data. In this case the coroutines are the parser and a simple routine that consumes the input from a list.
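Roughly, that pairing can be sketched with SWI-Prolog's reset/3 and shift/1 as follows (a minimal, untested sketch; the predicate names are made up):

c(Token) :-                      % the parser asks for the next token
    shift(c(Token)).

with_tokens(Goal, Tokens) :-     % the feeder supplies tokens from a list
    reset(Goal, Ball, Cont),
    (   Cont == 0                % Goal completed without shifting
    ->  true
    ;   Ball = c(Token),
        Tokens = [Token|Rest],
        with_tokens(Cont, Rest)
    ).

ab :- c(a), c(b).                % a tiny "parser"

?- with_tokens(ab, [a,b]).
true.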

I never really liked the example as, rather than suspending and resuming, we can do the work of the coroutine as an interrupt, which is way more efficient than suspend/resume. This type of coroutining does allow implementing tabling though and could, at least in theory, be used for async support in the WASM version. Both may also be implemented using engines. For async support that is obvious. Paul Tarau claims the suitability of engines for tabling. It is still something I'd like to explore one day.

I also have another remark about this.
When using a DCG state, I often rely on its logical purity and often reorder goals when debugging.
For example:

a -->
  state(StateIn, StateOut),
  { dif(StateIn, StateOut) }.

In this case, reordering state//2 and dif/2 does not impact the logical correctness of the code.
Would it be possible to handle such a use case with Jan's proposal?
Do engines handle this use case?

Here is an attempt at a much simplified alternative to edcg. It works on top of normal DCG, assuming a dict to hold the multiple state variables. I used Op<StateName to operate on the various state variables. Op is one of

  • List<StateName for a normal DCG terminal
  • String<StateName for a SWI-Prolog string acting as a terminal
  • Callable<StateName. When found, we call call(Callable, V0, V) to update the state for StateName

goal_expansion(<(On, Name, State0, State), Clause) :-
    expand(On, Name, State0, State, Clause).

expand(Literal, Name, State0, State, Clause), is_list(Literal) =>
    Clause = get_dict(Name, State0, List, State, Tail),
    append(Literal, Tail, List).
expand(String, Name, State0, State, Clause), string(String) =>
    Clause = get_dict(Name, State0, List, State, Tail),
    string_codes(String, Literal),
    append(Literal, Tail, List).
expand(Step, Name, State0, State, Clause), callable(Step) =>
    extend_goal(Step, [V0,V], StepEx),
    Clause = ( get_dict(Name, State0, V0, State, V),
               StepEx
             ).

Here are some examples

Interleave two lists

test_interleave(L) :-
    call_dcg(t, #{a:`abcde`, b:`12345`, c:L}, _).

t -->
    [L]<a,
    [N]<b,
    [L,N]<c,
    t.
t -->
    {true}.

Compute the length of a list

test_len(List, Len) :-
    call_dcg(list_len, #{len:0, list:List}, S),
    Len = S.len.

list_len -->
    [_]<list, !,
    increment<len,
    list_len.
list_len -->
    {true}.

increment(X0, X) :-
    X is X0+1.

Sum the elements in a list

test_sum(List, Sum) :-
    call_dcg(list_sum, #{sum:0, list:List}, S),
    Sum = S.sum.

list_sum -->
    [E]<list, !,
    add(E)<sum,
    list_sum.
list_sum -->
    {true}.

add(E, X0, X) :-
    X is X0+E.

The performance of test_sum/2 is about half that of the library predicate sum_list/2.

Comparison

Compared to keeping a state dict somewhere in the environment, this approach is clean in the sense that it does not require destructive update of the state but creates new instances of the state. The price is that GC must deal with these new instances, there is more state copying, and we need DCG expansion to thread the state dict through the computation. It performs quite well without the need for any new VM support.

Keeping a state dict in the environment does not require any source code transformation, but requires destructive updates to the state. This would be fine if we know the old state is inaccessible anyway, but otherwise it has dubious properties. It is also slow, but I think it could get comparable performance with more low-level support.


Interesting!
Push-back lists also just work:

a, [1]<a -->
   [0]<a.

?- a(#{a:[X]}, #{a:[Y]}).
X = 0,
Y = 1.

One question though: what is the fundamental difference from using a state with a push-back list in a DCG?

Is it the fact that this approach uses goal_expansion and is therefore much more efficient?

Because instead of a goal expansion, I believe we could define a DCG nonterminal <//2 that would perform the same kind of state manipulation at runtime?

You are reaching the edges of my knowledge of (mis)using DCGs for state. I think

a, [S] --> [S0].

was invented to deal with state while remaining compatible with phrase, which deals with lists. Historically this was not really the case: the two additional variables just represented an evolving state, without being specific about the nature of this state, and 'C'/3 "stepped" the state. SWI-Prolog introduced call_dcg/3 to allow calling a DCG without the type checks, so we can use them with arbitrary state types. Note that this makes DCG terminals meaningless. That also holds for the above state trick though.
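For example (a trivial sketch with a made-up predicate name), using a plain integer as the state:

inc(S0, S) :- S is S0 + 1.

?- call_dcg((inc, inc, inc), 0, S).
S = 3.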

Funnily enough, I didn't realize that. But yes, you can implement it at runtime; goal_expansion/2 just improves the efficiency. Use ?- listing(pred). to see what it creates.

Here is a classic implementation of Jan's idea, which is much slower:

% please, ignore the fact that '<2' is not an operator
'<2'(G, Acc, StateIn, StateOut) :-
   get_dict(Acc, StateIn, Value, StateOut, NewValue),
   call_dcg(G, Value, NewValue).

test_sum2(List, Sum) :-
    call_dcg(list_sum2, #{sum:0, list:List}, S),
    Sum = S.sum.

list_sum2 -->
   '<2'([E], list),
   !,
   '<2'(add(E), sum),
   list_sum2.
list_sum2 -->
   {true}.

Here is a quick, non-scientific benchmark:

?- numlist(0, 100000, L), time(sum_list(L, Sum1)), time(test_sum(L, Sum2)), time(test_sum2(L, Sum3)).
% 200,005 inferences, 0.018 CPU in 0.018 seconds (99% CPU, 11296477 Lips)
% 500,018 inferences, 0.063 CPU in 0.064 seconds (99% CPU, 7904739 Lips)
% 2,000,039 inferences, 0.184 CPU in 0.184 seconds (100% CPU, 10891331 Lips)
L = [0, 1, 2, 3, 4, 5, 6, 7, 8|...],
Sum1 = Sum2, Sum2 = Sum3, Sum3 = 5000050000.

DCTGs are a bit different from EDCGs. The referenced implementation is an interpreter (although it could easily be made into a source transformation, I think), which builds an intermediate tree and then walks it. I've used DCTGs to make a reversible implementation of cdecl (see https://cdecl.org/): you can give it either a C declaration or the English "explanation" and it'll generate the other (I once tried doing a reversible implementation without the DCTG formalism and abandoned it as too much work).

BTW, DCTGs are written up in the book _Logic Grammars_ by Harvey Abramson and Verónica Dahl; I have an autographed copy. :slight_smile:

EDIT: I was responding to an earlier post that mentioned DCTGs, but it appears to have been edited to remove the reference.


Even though get_dict/5 is a native built-in, it basically does a get_dict/3 and then a put_dict/4. The above has a slight speed advantage if StepEx is non-deterministic, since then the new Prolog dict is only created once and reused in all solutions.

But if StepEx is deterministic or semi-deterministic, you create an extra variable hole which needs to be filled, and if StepEx fails you have created a new dict for nothing. So I am a little bit suspicious about whether hard-wiring this idiom makes sense.

You could also provide two primitives:

current(Key, Value, State, State) :- get_dict(Key, State, Value).
set(Key, Value, State, State2) :- put_dict(Key, State, Value, State2).

And then the end-user would have the choice, either use this DCG idiom:

... -->
   ...
   current(attr, V),
   {semi_det_step(V, V2)},
   set(attr, V2),
   ...

Or then use this DCG idiom:

... -->
   ...
   current(attr, V),
   set(attr, V2),
   {non_det_step(V, V2)},
   ...

The end-user would also be free to have the step itself be some DCG, and not only some auxiliary procedure inside DCG {}/1, allowing DCG logic where the state update depends on the parsed or generated text.

I would like to take a step back on this discussion about efficiency.
For example, in my experience of using a DCG state in my music notation grammar, using and modifying the state will never ever be the speed bottleneck of my grammar.

Moreover, I think we all agree that the original sum_list/2 predicate will always be faster than using a DCG with state.
I believe the reason for that is that unpacking the list in question and using it as the first argument of the predicate allows Prolog to do first-argument indexing.
So maybe the best advice would be: if you have a hot code path that involves one accumulator of your state, unpack it and use it explicitly, as in the sketch below.
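A hedged sketch of that advice (predicate name made up): the hot accumulator, the input list, is an explicit first argument that benefits from first-argument indexing, while the rest of the state stays in a dict:

sum_list_state([], Sum, State, State) :-
    get_dict(sum, State, Sum).
sum_list_state([H|T], Sum, State0, State) :-
    get_dict(sum, State0, S0),
    S1 is S0 + H,
    put_dict(sum, State0, S1, State1),
    sum_list_state(T, Sum, State1, State).

?- sum_list_state([1,2,3], Sum, _{sum:0}, _).
Sum = 6.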

So maybe we should rather concentrate on which syntax would be the clearest to understand?


Yes, and this requires quite some program transformation.
For example, an ordinary DCG, especially in SWI-Prolog, can, thanks to the head movement of the first terminals in the body and thanks to deep indexing, efficiently execute:

command --> [foo], command_foo.
command --> [bar], command_bar.
...

The head movement can be seen when you use listing/1: both foo and bar landed in the head. But they are inside a list cell, so SWI-Prolog's deep indexing is also required:

?- listing(command/2).
command([foo|A], B) :-  command_foo(A, B).
command([bar|A], B) :-  command_bar(A, B).
....

Now the idea behind the (<)/2 syntax would be that one can simply change the code and annotate it with the attribute that carries the input/output list:

command --> [foo]<io, command_foo.
command --> [bar]<io, command_bar.
...

But how should the above be indexed, like the old DCG was indexed in SWI-Prolog? Quite difficult to do with SWI-Prolog dicts, which are notoriously hard to index.

So one rewriting, which can also be applied to ordinary DCGs as a substitute for deep indexing, would be the following. It only needs first-argument indexing, provided the first DCG parameter becomes the first argument:

command --> [X]<io, command2(X).

command2(foo) --> command_foo.
command2(bar) --> command_bar.
...

I'm not sure about the relation between multiple threaded state variables and classical DCGs as parser/serializer. The dict-based approach seems to be a way to handle named threaded state variables in a fairly easy and efficient way. That is all. My question is whether this is sufficiently interesting to turn it into a proper library and, if so, how we settle the details (naming, etc.). We could also decide that we have enough similar libraries, so why bother…

This morning I finally read the EDCG technical report, which is relatively short, and I found this observation from the introduction quite clear:

Full quote

It is desirable to program in a purely applicative style, i.e. within the pure logical subset of Prolog. (…) However, in practice the number of arguments of predicates written in this style is large, which makes writing and maintaining them difficult. Two ways of getting around this problem are:

  1. to encapsulate information in compound structures which are passed in single arguments
  2. to use global instead of local information

(…) but neither is a satisfying way to program in Prolog, for the following reasons:

  • Because Prolog is a single-assignment language, modifying encapsulated information requires a time-consuming copy of the entire structure. Sophisticated optimizations could make this efficient, but compilers implementing them do not yet exist.
  • (… something about global variables)

A third approach with neither of the above disadvantages is extending Prolog to allow an arbitrary number of arguments without increasing the size of the source code.

In particular this quote:

Because Prolog is a single-assignment language, modifying encapsulated information requires a time-consuming copy of the entire structure. Sophisticated optimizations could make this efficient, but compilers implementing them do not yet exist.

At the time, I suppose that using a compound structure like an rbtree or a dict equivalent to maintain state was seen as too inefficient. However, from my own experience, I believe that nowadays there are use cases where we can use a dict or a dict equivalent to maintain a large number of state variables while keeping acceptable speed.

So if I can further my previous advice:

  • If you need maximum speed:
    • if you have 1 or 2 threaded state variables and your program is small: unpack them and use them explicitly
    • if you have 2 to 5 threaded state variables or your program is big: use edcg
    • if you have a lot of threaded state: use a dict together with edcg, unpacking the speed-critical state and putting the rest in the dict
  • If you don't need speed:
    • use a dict or equivalent in a classic DCG

In order to demonstrate that you don't need to compromise on speed with edcg, here is an implementation of sum_list/2 which is as fast as the original:

:- use_module(library(edcg)).

edcg:acc_info(sum, T, In, Out, Out is In + T).
edcg:pred_info(sum_list2, 0, [dcg, sum]).

sum_list2 -->>
   []/dcg.
sum_list2 -->>
   [H],
   [H]:sum,
   sum_list2.

And the main reason why this version is faster is that with edcg we can decide the order of the accumulators, and choose as first argument the one that benefits most from first-argument indexing!


By the way, after reading the edcg report I found this gem at the very end:

  1. Tips and techniques

Usually there will be one clause of pred_info for each predicate in the program. If the program becomes very large, the number of clauses of pred_info grows accordingly and can become difficult to keep consistent. In that case it is useful to remember that a single pred_info clause can summarize many facts. For example, the following declaration:

pred_info(_, _, List).

gives all predicates the hidden parameters in List. This keeps programming simple regardless of the number of hidden parameters.

So, with this, my complaint that I did not like the use of prototypes is gone, since I can define a single prototype for all predicates!
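Concretely, with the edcg pack this would presumably look something like the following (hedged; the sum accumulator is just the earlier example), giving every -->> rule in the file the same hidden parameters:

:- use_module(library(edcg)).

edcg:acc_info(sum, T, In, Out, Out is In + T).
edcg:pred_info(_, _, [dcg, sum]).   % one catch-all "prototype" for every predicate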
