Elevator Example and the Prolog VM

In recent versions you can write

 Head[, Guard] => $, Body.

$/0 tells the system that the body should succeed deterministically. Note that violating this merely raises an error (or warning); the compiler doesn’t do anything with it. You can also use det/1 to declare your predicates deterministic. Again, this is merely a fairly cheap runtime check that should help you make your code deterministic. In 99% of the cases once/1 should be deprecated: it prunes choice points too late.
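For concreteness, here is a minimal sketch of both annotations (the predicate names are made up for illustration):

```prolog
% det/1 declares max_of/3 deterministic; SWI-Prolog checks this at
% runtime and raises an error if a call fails or leaves a choice point.
:- det(max_of/3).

max_of(X, Y, X) :- X >= Y, !.
max_of(_, Y, Y).

% An SSU rule with a guard: once `X < 0` succeeds the clause commits,
% and `$` asks the system to verify the rest of the body is deterministic.
abs_of(X, Y), X < 0 => $, Y is -X.
abs_of(X, Y)        => Y is X.
```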

I know I am laboring this point – but what would it take to “pause” choice point creation until the end of a body, in a “flat” way? That is, a goal called within the body is a new game: it would require its own =!> to pause its own choice point creation.


My assumption is that, although creating a choice point apparently has minimal overhead, it does add up over calls that are made deterministic through cuts, for example. Hence there would be an overall performance hit (just as once/1 causes, in the manner I had been misusing it).


But are choice points ever avoided due to a determination made at compile time, or is it always a runtime check/decision?

My understanding is that it’s a runtime determination – when it comes to choice points, Prolog either determines on its own at runtime that no choice point is needed, or it fails to determine that, and then it’s up to the developer to prune them.

So, if my understanding is correct, there would be a minimal overhead for each call – at minimum to determine that no choice point creation is required.

But is it correct that a mechanism such as first-argument indexing is a runtime mechanism rather than a compile-time one? Or is it a compile-time determination?

But, again, in my mind the key question is when SWI-Prolog determines that no choice point creation is needed: at compile time, or at runtime through a test that happens, say, “in between” VMI calls?

Why is once/1 deprecated? In favor of what? We use once/1 to get the first solution (if any) of a nondet predicate.

Can you elaborate?

Yes, but that doesn’t matter, as the logical update view semantics tell us that a clause that is added after we started the goal cannot be backtracked into 🙂

So it is simply this: if we hit the last clause of the relevant clause set, we do not need a choice point (unless debug mode is enabled). The notion of relevant clause set is sometimes a compile-time decision and sometimes a runtime decision; the documentation on clause indexing is fairly explicit about that. Still, this is not set in stone. Future versions are likely to improve on it, as avoiding choice points is crucial in allowing for both logically correct and efficient code: deterministic code if the call is sufficiently instantiated, and logically correct non-deterministic code if it is not.

Creating and quickly killing a choice point is only painful for nearly trivial code; for anything doing a bit more work it doesn’t matter too much. What does matter is leaving choice points open for a long time. That prevents last-call optimization, which is slower to begin with (but again, this doesn’t matter too much), but notably prevents old data from being garbage collected, making memory grow quickly and causing garbage collection time to grow (GC is linear in the size of the reachable data).
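As a small illustration (the predicate is made up): with first-argument indexing, a sufficiently instantiated call selects a single clause and leaves no choice point, while a less instantiated call of the same predicate remains non-deterministic:

```prolog
colour(red,   warm).
colour(green, cool).
colour(blue,  cool).

% ?- colour(green, T).
% T = cool.        % indexing selects one clause: no choice point left
%
% ?- colour(C, cool).
% C = green ;      % first argument unbound: a choice point remains
% C = blue.
```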

And of course ;/2, as well as various built-in predicates, create choice points.


Not sure I like backtrackable asserts, but there is an existing design for updatable (and potentially backtrackable) arrays that seems to fit well with Prolog, namely chapter 11 of Dijkstra’s “a discipline of programming” (out of print, I think: there’s a PDF here: https://seriouscomputerist.atariverse.com/media/pdf/book/Discipline%20of%20Programming.pdf).

Dijkstra’s design allows 0- and 1-origin arrays (in fact, any origin); there are algorithms that are more convenient with 1-origin, so that might be a nice thing to keep.

The operations are as follows – “.” is used for accessing and “:” is used for changing the array (the “:” comes from the Algol assignment operator “:=”). I don’t claim that these are the best set of operations, but they’re an interesting starting point: they allow treating the array as an entity; all the operations are roughly the same complexity; and they seem to fit well with Jan’s design for dicts (including destructive backtrackable updating):

  • ax.lob - array ax’s lower bound (typically 0; sometimes 1)
  • ax.hib - array ax’s high bound
  • ax.dom - the number of elements in ax (that is: ax.hib - ax.lob + 1)
  • ax[i] - the i-th element of ax
  • ax.low - abbreviation of ax[ax.lob]
  • ax.high - abbreviation of ax[ax.hib]
  • ax:alt(i,x) - change the i-th element of ax to x
  • ax:shift(k) - shift all the elements by k (probably most useful for converting 0-origin to 1-origin and vice versa)
  • ax:hiext(x), ax:lowext(x) - push an element at the end or the beginning
  • x=ax:hipop, x=ax:lopop - pop an element from the end or the beginning
  • ax:swap(i,j) - swap the values at i and j (no-op if i==j)

All the “:” operations should backtrack in Prolog, similar to how b_set_dict/3 works.
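For comparison, a few of these operations can already be sketched today on top of a compound term, using setarg/3 for backtrackable in-place updates. All names here are assumptions for illustration, and the resizing operations (shift, hiext, hipop, …) would need real storage management:

```prolog
% An array is represented as array(Lob, Data), where Data is a compound
% term holding the elements.
new_array(Lob, Hib, Init, array(Lob, Data)) :-
    Dom is Hib - Lob + 1,
    functor(Data, data, Dom),      % Dom fresh arguments
    fill(Dom, Data, Init).

fill(0, _, _) :- !.
fill(I, Data, Init) :-
    arg(I, Data, Init),            % unify a fresh argument with Init
    I1 is I - 1,
    fill(I1, Data, Init).

array_lob(array(Lob, _), Lob).                  % ax.lob
array_hib(array(Lob, Data), Hib) :-             % ax.hib
    functor(Data, _, Dom),
    Hib is Lob + Dom - 1.

array_get(array(Lob, Data), I, X) :-            % ax[i]
    Ix is I - Lob + 1,
    arg(Ix, Data, X).

array_alt(array(Lob, Data), I, X) :-            % ax:alt(i,x)
    Ix is I - Lob + 1,
    setarg(Ix, Data, X).           % O(1), undone on backtracking
```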

Forget about “backtrackable asserts” and look at Dijkstra’s operations on arrays – currently, array-like processing in Prolog is an O(log N) operation and involves copying the structure – backtrackable arrays would be an O(1) operation. (Dicts are also O(log N) but could become O(1))

If you know that the array has no other references, then no need to copy when updating. But that requires having a reference count, I suppose.

Let me try to clarify.

I was pointing to the dichotomy in Prolog between the fact base, which is non-backtrackable out of the box, and terms referenced by variables during goal execution, which are backtrackable by design – no need to do anything to make them so.

The backtrackable asserts technique is (merely) a consequence of this design dichotomy – not a means to improve performance.

However, in my work I need backtrackable properties on facts in the fact base.

Currently, the performant way to achieve backtrackable properties on facts during processing is to thread along, in a pair of arguments, a mapping between atoms and their properties – such as an assoc.
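A small sketch of that threading style, using library(assoc) (the predicate names are made up):

```prolog
:- use_module(library(assoc)).

% S0 and S thread the atom -> property mapping through the computation.
set_priority(Person, Prio, S0, S) :-
    put_assoc(Person, S0, Prio, S).

get_priority(Person, Prio, S, S) :-
    get_assoc(Person, S, Prio).

demo(Prio) :-
    empty_assoc(S0),
    set_priority(dan, 3, S0, S1),
    get_priority(dan, Prio, S1, _).    % Prio = 3
```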

Implicit in this design is that the atom is an argument of a selected predicate, e.g. person(dan); so it’s dan the person who gets attributed, not necessarily the stand-alone atom dan.

In a language like C such a dichotomy between “static” (fact-base-like) and variable-referenced memory doesn’t exist; hence an atom within the person structure and its property live in the same kind of memory space, and the allocated atom can hold a pointer to its properties.


Perhaps something like this (aligned with the backtrackable variables technique (e.g. [1])):


put_priority(X, Y) :- person(X)&Y0, Y0 <= Y.

get_priority(X, Y) :- person(X)&Y0, Y0 => Y.

And suppose that & is a distinguished operator to set or get attributes stored with the clauses person(alice), person(dan), etc.

And the operators <= and => set and get the values – all backtrackable.

All this would only make sense if, in the course of indexed retrieval of person(X), the clause, say person(dan), is located in memory, and getting at the attribute term would then be only a pointer hop away.
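Until such clause attributes exist, the behaviour (though not the O(1) pointer hop this proposal is really after) could be approximated with a backtrackable global variable holding an assoc – a sketch with assumed names:

```prolog
:- use_module(library(assoc)).

person(alice).
person(dan).

init_attrs :-
    empty_assoc(A),
    b_setval(attrs, A).

put_priority(X, Y) :-          % the hypothetical person(X)&Y0, Y0 <= Y
    person(X),
    b_getval(attrs, A0),
    put_assoc(X, A0, Y, A),
    b_setval(attrs, A).        % undone on backtracking

get_priority(X, Y) :-          % the hypothetical person(X)&Y0, Y0 => Y
    person(X),
    b_getval(attrs, A),
    get_assoc(X, A, Y).
```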


p.s. Whether clause attributes survive the end of goal processing might be an option – they could behave like global variables, or they could be transient; transient is probably the better (default) option.

[1] logtalk3/assignvars.lgt at e18aee671ac0d90d8b50c66a6a5a1047b815bddf · LogtalkDotOrg/logtalk3 · GitHub

Indeed. It is the predicate’s supervisor code that selects clauses. The clauses themselves have no knowledge about other clauses.


I moved this thread over to a new thread.

Just as a quick aside – the example in my code is not related to hypothetical reasoning, but to regular processing of facts and meta-data generated as part of it.

It’s possible that some generated meta-data could be memoized more permanently, but this would require investigating under what conditions such meta-data becomes invalidated.


Could you point me to where in the supervisor code a determination/decision is made whether to create a choice point or not, prior to calling a clause’s VMIs?

My reply was to @grossdan, please feel free to ignore what I wrote.

For the umpteenth time, the Elevator Example here is not concurrent. You cannot even parallelize it.

Yes you can. You could make use of restricted-AND parallelism and stream-AND parallelism. I’m taking these terms from Steve Gregory’s Parallel Logic Programming in Parlog.

Restricted-AND: the two arithmetic operations are independent.
Stream-AND: the two arithmetic result variables can be streamed to the tail call.
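To make the restricted-AND claim concrete: in SWI-Prolog the independence of the two arithmetic goals could be exploited with concurrent/3 from library(thread). This is a sketch only – for two trivial additions the threading overhead would dwarf any gain – and the predicate and variable names are assumptions:

```prolog
:- use_module(library(thread)).

% Restricted-AND parallelism: the two goals share no unbound variables,
% so they can safely be run in separate threads.
step(Up0, Down0, Up, Down) :-
    concurrent(2,
               [ Up   is Up0   + 1,
                 Down is Down0 - 1
               ],
               []).
```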

I’ve given you a good enough reference to see how this can be implemented (and how it was done in Parlog).

It doesn’t.

I’ll stop posting in this thread when I don’t see something that really should be corrected.

Your points (a) and (b) are misinterpretations of what I wrote. I merely gave a reason for C/C++ being faster on modern CPUs, then I gave an example of a logic programming language used as a systems programming language. That’s it. Nothing more, nothing less. I don’t “want” anything.

I guess the next interesting target for comparison might be WASM.