Is there an easy way to view "stack traces" with the interpreter?

Just a small redundant explanation concerning:

% NOTHING IS BROKEN. You will see scary red messages.
% THEY DO NOT EXIST ON MY WINDOWS MACHINE AT ALL.
% And nothing is broken. I checked. It all works. I don't know what's up.
% But it works on my computer, and you can even query the things SWISH
% says doesn't exist and they WILL exist. Okay?

Red predicates are entry points, that is, predicates that are not called by other predicates. But the analyzer that decides the color doesn't understand what expand_term/2 does, and that trait/4 is in fact indirectly called by has_trait/1, has_trait_type/2, etc. Maybe red is a bit too aggressive a color for marking an entry point. It also has a double meaning: if it is red and it is not the head of a clause but a goal in the body, it means a missing definition. The same problem applies there: the analyzer cannot extrapolate what term expansion will do. I don't know whether there is some notebook technique that avoids both, except that you could try, at the top of the file:

:- dynamic(asserted/1).

The color now changes from red to pink for body occurrences. It also changes the operational semantics of the expanded predicates a little, since dynamic code may be compiled differently than static code. I don't know whether there is a directive, say static/1, that would announce: hey, I will be expanded?

True. Best is to use modules. Then you can export your entry points, making it a lot easier to understand the code and they become bold and blue.

Well, the cross referencer does run term expansion if it can. So, it is wise to run the development tools while the code is loaded. This does not work for SWISH, but works fine for local development. If for some reason you frequently need to restart Prolog (due to side effects of running the program), I usually run two Prolog instances: one with the code loaded for editing and one to run the code.

Then there are some tricks. If you want term expansion to generate clauses for, e.g., p/4, use something like the following, i.e., expand the same predicate as the one it produces:

term_expansion(p(dummy, _, _, _), Clauses) :- ...

p(dummy,_,_,_).
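A fuller sketch of that trick, under assumptions of mine (the Pairs list format and the `expanded` tag are illustrative, not from this thread): the dummy clause gives the cross-referencer a visible definition of p/4, while the expansion rule rewrites the dummy pattern into the real clauses.

```prolog
:- multifile user:term_expansion/2.

% Expand a p(dummy, Name, Pairs, _) term written in the source into
% one p/4 clause per Key-Value pair (illustrative scheme only).
user:term_expansion(p(dummy, Name, Pairs, _), Clauses) :-
    is_list(Pairs),
    findall(p(Name, Key, Value, expanded),
            member(Key-Value, Pairs),
            Clauses).

% The dummy clause itself survives expansion (its Pairs argument is
% unbound, so is_list/1 fails) and keeps p/4 known to the cross-referencer.
p(dummy, _, _, _).
```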

If you really want, the library(prolog_colour) has hooks that allow you to make it understand what you are doing and (re-)define the style for identified fragments. An extensive example of this is in library(http/html_decl). Not worth the trouble for a simple program, but worth considering if you define some infrastructure you’ll be developing for an extensive period of time. You can also use that with SWISH, but only if you run your own server, so you can load these extensions.

Thanks, but I’m talking about something a little different, and I don’t care too much about the color red.

It has to do with these scary red messages: right here. They say my code doesn’t exist, then happily query from it.

Meanwhile, SWI-Prolog on Windows displays nothing of this sort.

Might be a SWISH issue. Can you post a link to the program on SWISH?

A popular technique is a kind of staging. Instead of using expansion, one would have two Notebooks: one with the original rules and a transpiler, and one with the transpiled rules and helper predicates. If I am not mistaken, Alain Colmerauer already proudly showed such a feature for his Prolog v0. You can turn an expansion into a transpiler by renaming your expansion rule from term_expansion/2 to term_transpile/2 and then having something like:

transpile(Out) :-
    tell(Out),
    clause(trait(X, P, V, Y), B),   % enumerate the trait/4 clauses
    term_transpile((trait(X, P, V, Y) :- B), L),
    member(C, L),
    write(C), write('.'), nl,
    fail.
transpile(_) :-
    told.

?- transpile('out.pl').

But I don't know whether SWISH offers upload or storage space. I saw somewhere an example where Excel sheets were uploaded? Maybe SWI-Tinker could do it and store it in browser storage.

So the problem is that out.pl will probably be a local file and you would do it locally. If you install a local Notebooks server, you can possibly do such stuff. But let's say I wanted to use SWISH: does it even remotely have a current working directory for each user?

Disclaimer: maybe out.pl will be huge compared to the original, because of Metzler's trick. So I am not sure whether what I wrote is even a good idea for the problem at hand.

The public version does not. If you run your own server you can do more, but the public version is fully stateless.

Yes, Tinker can do this. It offers a virtual filesystem from Emscripten and browser storage based persistency for part of that.

But then, this does not add anything new compared to using term expansion. Note that if term expansion requires more context, you can have one term expansion rule that simply asserts the terms read, mapping them to [] and then you expand end_of_file by translating all asserted terms and return a list of clauses (and/or directives). That is, for example, how CHR works in SWI-Prolog (while originally it was a transpiler).
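That accumulate-then-expand pattern could be sketched as follows; rule/1 and compile_rule/2 are hypothetical placeholders for whatever terms and translation the application actually uses.

```prolog
:- dynamic buffered_rule/1.

% Phase 1: remember each rule/1 term and map it to [] (no clauses yet).
term_expansion(rule(R), []) :-
    assertz(buffered_rule(R)).

% Phase 2: at end_of_file, translate everything collected in one go.
% end_of_file must stay in the result so loading still terminates.
term_expansion(end_of_file, Clauses) :-
    findall(Clause,
            ( retract(buffered_rule(R)),
              compile_rule(R, Clause)      % hypothetical translator
            ),
            Clauses0),
    append(Clauses0, [end_of_file], Clauses).
```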

Yeah, that's something that gets overlooked sometimes, even in the Datalog case. Take this example:

path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, Y) :- edge(X, Y).

With loop detection but without some fixpoint iteration, it would only compute path = edge. So stack traces/checking for cycles, either with b_setval/2 or with term_expansion/2, gives results that have something missing. But I wonder whether Prolog systems have some alternatives to the above in stock. I read CHR in this thread; can it solve left recursion? Modern tabling does it via SLG resolution (XSB, etc.), but it can also be done via OLD resolution (Picat, etc.). On my side I was mentioning streaming computations, basically bottom-up methods, which would also solve the problem by going from backward chaining to forward chaining, and trivially also solve left recursion.

Some incremental bottom-up methods can perform better for right recursion and double recursion? But I am not aware of a popular library for SWI-Prolog that would do that. I also don't know whether the EYE reasoner could do it.

I was studying some Soufflé papers. They reported 2.0 secs for computing the full transitive closure of a random graph with 1000 vertices and 10000 edges. This was reported in 2016; I tried the same now that we are in 2026, with SWI-Prolog tabling. The left recursive version again shows 2.0 secs, and I have a somewhat more powerful machine than the 3.0 GHz machine they used; it boosts to 5.0 GHz:

/* SWI-Prolog 10.1.5 */
/* left recursion via tabling */
?- between(1,3,_), random_edges, time(test), fail; true.
% 16,010,054 inferences, 1.891 CPU in 1.926 seconds
% 15,992,042 inferences, 1.891 CPU in 1.897 seconds
% 16,010,042 inferences, 1.828 CPU in 1.888 seconds
true.

But what's interesting: tabling also helps solve other forms of recursion, like right recursion and double recursion. Such recursion probably also comes up in the problem posted by @mathmaster13.

:- table path2/2.
path2(X, Y) :- edge(X, Y).
path2(X, Z) :- edge(X, Y), path2(Y, Z).

:- table path3/2.
path3(X, Y) :- edge(X, Y).
path3(X, Z) :- path3(X, Y), path3(Y, Z).

The situation is a little bit worse for these forms of recursion:

/* right recursion via tabling */
?- between(1,3,_), random_edges, time(test2), fail; true.
% 75,322,867 inferences, 3.609 CPU in 3.650 seconds
% 75,322,334 inferences, 3.609 CPU in 3.648 seconds
% 75,322,686 inferences, 3.594 CPU in 3.645 seconds
true.

/* double recursion via tabling */
?- between(1,3,_), random_edges, time(test3), fail; true.
Action (h for help) ? abort
% 1,034,087,279 inferences, 49.141 CPU in 50.689 seconds
% Execution Aborted

BTW: I was using this data generator; most of the time the resulting transitive closure is the full graph K_1'000, with 1'000'000 path/2 entries, hinting at quadratic complexity somehow?

:- dynamic edge/2.

random_edges :-
   abolish_all_tables,
   retractall(edge(_,_)),
   between(1,10000,_),
   random(W), A is floor(W*1000),
   random(V), B is floor(V*1000),
   assertz(edge(A, B)),
   fail.
random_edges.

In a way, you could say tabling turns backward chaining into forward chaining for the minimal cases needed to make left recursion terminate, i.e., it delays the left-recursive calls and, when answers come from the other clauses, resumes the delayed goals with these answers (from the table).

That also holds for tabled negation: if you can't solve it now, delay the tnot/1 goal until some answer has been added to a table that allows you to solve it. For negation, that implies that if an answer is added to the table, the negation fails; if the table is completed without an answer, the negation succeeds. This resolves a lot of cases and can be considered forward chaining. We may still end up with mutually dependent negations that cannot be resolved. Then we have some additional logic to decide them, or the negation and its dependent goals become undefined.
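The classic minimal example of such mutually dependent negations, which both SWI-Prolog and XSB leave undefined under the well-founded semantics:

```prolog
:- table p/0, q/0.

p :- tnot(q).
q :- tnot(p).
```

Neither p nor q can be decided; querying them reports the residual dependency rather than a plain true or false.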

At least, that is how I try to understand tabling at a conceptual level. It also explains that the termination conditions are the same as with forward chaining, i.e., any program for which every goal has a finite number of answers terminates. In other words, the only way to get non-terminating programs is by having a goal that has an infinite number of answers like any integer (given unbounded integers), any list, etc.
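A minimal instance of that remaining non-termination source, a tabled goal with infinitely many answers:

```prolog
:- table nat/1.

nat(0).
nat(N) :- nat(M), N is M + 1.
```

?- nat(X). does not terminate: with unbounded integers the table for nat(_) can never be completed.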

That's the SLG way to speak of it. But I am assuming the OP wants a kind of exhaustive enumeration. My question was rather whether there are other libraries around that do it differently, insofar as I interpret the OP's needs as an exhaustive query:

?- path(X,Y), write(X-Y), nl, fail; true.

Such queries were traditionally adorned with mode ff in old database theory; there is not much backward chaining going on in them. I don't think the variant keys will ever look different from path(_,_) for left recursion? But it is very hard to beat SWI-Prolog SLG tabling nevertheless. My suspicion is that the tabling subsystem uses its own table spaces, with its own indexing, and creates a parallel implementation of a dynamic database. And for some reason this parallel space is more performant than the usual assert/retract database. At least I was not yet able to beat it here:

With the dynamic database, the best I get is about 2x slower:

/* SWI-Prolog 10.1.5 */
/* left recursion via dynamic */
?- between(1,3,_), random_edges, time(test), fail; true.
% 23,020,000 inferences, 3.625 CPU in 3.664 seconds
% 23,020,000 inferences, 3.547 CPU in 3.620 seconds
% 23,020,000 inferences, 3.594 CPU in 3.634 seconds
true.

This is somehow good and bad news. It shows that there are a dynamic database and algorithms out there, namely SLG tabling with tries and the extension by a table/1 directive, that have more performant indexing than ISO assert/retract, or more clever algorithms. I am not sure why this simple code cannot beat tabling; it is a little mysterious to me:

/* transitive closure via dynamic database */
:- dynamic path/2.

% test
test :-
   edge(X, Y),
   post(X, Y),
   fail.
test.

% post(+Integer, +Integer)
post(X, Z) :-
   \+ path(X, Z),
   assertz(path(X, Z)),
   edge(Z, Y),
   post(X, Y).
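For comparison, a semi-naive variant of this loop, joining only the newly derived paths (the delta) with edge/2 on each round. The predicate names are mine, and edge/2 is assumed as above:

```prolog
:- dynamic path/2, delta/2.

seminaive :-
    retractall(path(_, _)),
    retractall(delta(_, _)),
    forall(edge(X, Y), add_new(X, Y)),
    iterate.

% Record a path once; genuinely new ones also enter the delta.
add_new(X, Y) :-
    (   path(X, Y)
    ->  true
    ;   assertz(path(X, Y)),
        assertz(delta(X, Y))
    ).

% Join only the current delta with edge/2 until nothing new appears.
iterate :-
    findall(X-Z, retract(delta(X, Z)), Delta),
    Delta \== [],
    !,
    forall(( member(X-Z, Delta), edge(Z, Y) ),
           add_new(X, Y)),
    iterate.
iterate.
```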

Tabling (as discussed here recently) stores its tables in tries. The structure is a trie of tries, where the outer trie maps goal (variants) to answer tries. Adding an answer to a trie if it is not there yet is typically faster than a dynamic DB query plus assert, as the query and the addition happen in one pass; in this case, adding the new path is just adding one new atom to a hash table.
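As far as I understand the SWI-Prolog trie API, the one-pass check-and-add can be tried directly at the user level, since trie_insert/2 fails when a variant of the term is already present:

```prolog
?- trie_new(T),
   (   trie_insert(T, path(1, 2))
   ->  writeln(added)
   ;   writeln(duplicate)
   ),
   (   trie_insert(T, path(1, 2))
   ->  writeln(added)
   ;   writeln(duplicate)
   ).
```

The second insert should take the duplicate branch, mirroring how the tabling engine detects an already-known answer in one pass.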

There is more, as tabling only pushes new answers through the delayed goals, while you go over the whole lot until nothing can be added. But the tabling approach has a high price in suspending and resuming computations. There are two ways to do that. XSB goes back in history, runs the suspended goal and then forwards again to the current state. SWI-Prolog uses delimited continuations, which effectively copy a part of the stack and restore it to run the suspended goal.

Your approach also gets more complicated with more complex goal dependencies.

Since the bottom-up example doesn't suspend much, I don't think this explains how to do it more quickly. The call is only path/2 with mode (-,-), so the left recursive call is again mode (-,-), and there is basically only one node, aka predicate and call pattern, in limbo. In old database theory this was called an ff node, but modern mode notation would say it is a (-,-) node. However you want to call it, all arguments are output, so the system is forced to compute the full transitive closure. But for that the system needs to create a fixpoint only for this single node in the call graph, so there is no difficult dependency analysis to find some complicated mutual recursion that would need a simultaneous fixpoint computation or something. I find that SWI's own SLG implementation is a factor of 2x behind the SLG-WAM from XSB. Recall SWI had 1.926, 1.897 and 1.888 secs:

/* XSB Version 5.0-beta */
/* left recursion via tabling */
?- between(1,3,_), random_edges, time(test), fail; true.
% 1.141 CPU in  1.133 seconds (100% CPU)
% 1.093 CPU in  1.093 seconds (100% CPU)
% 1.078 CPU in  1.089 seconds (98% CPU)

XSB also beats the Soufflé paper on the same 2025 machine:

/* Soufflé 2016 */
/* left recursion, 2026 rerun */
Parse time: 0.0193307sec
---------------
count_paths
n
===============
1000000
===============
Total time: 1.3141731000000001sec

I suspect it has less to do with delimited continuations, but rather with how the tries are implemented. And somehow I now suspect that tries do not really parallel a dynamic database: they probably also have some mechanism for database deltas, so that semi-naive fixpoint computation can be deployed. But I don't know where this would be documented for SWI or XSB, and why ordinary dynamic databases do not have deltas?

No. The trie implementation is fairly competitive with XSB. The issue is that the logic for turning a normal predicate into a tabled predicate is all in Prolog for SWI-Prolog and much more low-level in XSB. As a result, the overhead of tabled calls is much higher than for XSB. That could be resolved by using a dedicated predicate supervisor rather than the generic predicate wrapper supervisor, and making this dedicated supervisor figure out the involved table and its status.

The initial SWI-Prolog tabling implementation was written entirely in Prolog by Benoit Desouter, proving that delimited continuations are a building block for implementing tabling with little impact on the Prolog VM. In the current implementation you can still recognise the design, although a lot of Benoit's Prolog code was moved to C and I implemented most of the XSB features. This progressed pretty fast thanks to weekly telcos with Theresa Swift and David Warren from XSB (and Benjamin Grosof providing the resources to make this happen).

The suspension mechanism has been compared in the literature (for stack copying, but that is essentially what SWI-Prolog does). Bottom line is that it is easy to create programs where either of them is better, and thus it is hard to assign a clear winner. You can easily see why: SWI-Prolog depends mostly on the depth and size of the environment stack frames that propagate from one tabled call to the next, while XSB depends mostly on the number of bindings created between these two points.

I don't say SWI is not competitive; SWI does quite an amazing job at tabling. But sometimes competitive means +/- 10% and not +/- 100% performance difference. Is "logic" the problem? Do you have a utility that shows how many shift/reset operations are performed, namely how often a delimited continuation is used? In the present example there should be only one shift/reset, since there is only one mode (-,-) node:

/* (-,-)  ~~>  (-,-)  ~~>  (+,-)  */
path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, Y) :- edge(X, Y).

There is only one worker W for the mode (-,-); it is not that you have different workers W_K for keys K such that path/2 gets called as path(K,_), since that mode would be (+,-) and it would spawn a new worker all the time.

So the cost, for this example rule set and then this query:

?- path(X, Y), fail; true.

is O(1) for the worker creation, and the rest is the cost of the tries or whatever iteration. I am now using the notion "worker" instead of "node", following this paper. There is the cost of calling a continuation, but this is also only O(1); there is not much extra cost during backtracking of the continuation, right?

Tabling as a Library with Delimited Control
https://www.ijcai.org/Proceedings/16/Papers/619.pdf

What is the cost of the tries? It depends both on the number of edges |E| and the number of vertices |V|. You can run the Soufflé test case with |V| = 1000 and, diverting from the paper's |E| = 10000, try |E| = 5000, 10000, 15000, 20000:

/* SWI-Prolog 10.1.5 */
?- between(1,4,N), random_edges(N), time(test), fail; true.
% 10,875,413 inferences, 1.125 CPU in 1.157 seconds (97% CPU, 9667034 Lips)
% 16,010,042 inferences, 1.891 CPU in 1.920 seconds (98% CPU, 8468121 Lips)
% 21,015,042 inferences, 2.562 CPU in 2.652 seconds (97% CPU, 8200992 Lips)
% 26,020,042 inferences, 3.234 CPU in 3.314 seconds (98% CPU, 8044844 Lips)
true.

It will show you whether the data structure contributes an |E| factor, as for a hash, or |E| log(|E|), as for example for a b-tree. Soufflé uses a b-tree by default. The SWI-Prolog trie is ok; it shows a linear |E| factor.
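For reference, random_edges/1 as used in these runs is presumably the earlier generator parameterised by the edge count; a sketch of what I assume it looks like:

```prolog
:- dynamic edge/2.

% random_edges(+N): N*5000 random edges over vertices 0..999
% (assumed parameterisation: N = 1..4 gives |E| = 5000..20000).
random_edges(N) :-
    abolish_all_tables,
    retractall(edge(_, _)),
    M is N * 5000,
    between(1, M, _),
    random(W), A is floor(W * 1000),
    random(V), B is floor(V * 1000),
    assertz(edge(A, B)),
    fail.
random_edges(_).
```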

Still, XSB has a faster data pipeline somewhere inside a single worker W:

/* XSB Version 5.0-beta */
?- between(1,4,N), random_edges(N), time(test), fail; true.
% 0.688 CPU in  0.694 seconds (99% CPU)
% 1.11 CPU in  1.104 seconds (100% CPU)
% 1.516 CPU in  1.513 seconds (100% CPU)
% 1.906 CPU in  1.922 seconds (99% CPU)
N = _h401

But both are linear in time on the example, only with a different steepness and a different zero cost.

Only left recursion. If it killed all infinite recursion it would cause incompleteness.

Not clear. If there are new variables something’s changing them and that something can eventually result in termination.

Now I wonder if I should write some code to do this dynamically, i.e. check whether a predicate is tabled and re-order its clauses in the database. Could the compiler do that as an optimisation?

I routinely run out of table space in my ILP stuff. Learning recursive predicates is even more dangerous than coding them by hand :sweat_smile:

I think the original paper on bottom-up computation with Datalog was Bancilhon F., Ramakrishnan R. An amateur’s introduction to recursive query processing strategies. in: Proc. of ACM SIGMOD’86.

Here are a few open-source projects on Datalog that I know of; I don’t know which algorithms they use or how fast they are compared to Prolog tabling.

and I’ve skimmed some papers by Kristopher Micinski and his students, and there have been some discussions on Mathstodon
See also Yedalog: Exploring Knowledge at Scale

In the range-restricted Datalog case, it terminates also for right and double recursion. You can easily prove it via the program operator and the fact that the Herbrand domain is finite if you have no function symbols, only constants.

It would be pretty useless for the semantic web if it did not always terminate for Datalog. And the example by the OP is also Datalog, except for the extra explanation argument. It should also terminate for Datalog with Negation (Datalog¬) or less range-restricted cases, but it goes into a 3-valued solution if the negation is not stratified. What it cannot do is Datalog with Disjunction (DLV), while Answer Set Programming (ASP) can do it.

It does Datalog, only not that fast: the run time is usually left < right < double. If you choose a data set with smaller dimensions |V| and |E|, the double recursion will also terminate. The Soufflé data set already caused a pretty long wait for double recursion, so I aborted it.

The left recursion is fastest since it creates only one worker W. The right recursion already creates many workers W_K iteratively, since for right recursion the mode (+,-) for path2/2 appears when you call it with mode (-,-):

/* (-,-)  ~~>  (-,-)  ~~>  (+,-)  */
path2(X, Y) :- edge(X, Y).
path2(X, Z) :- edge(X, Y), path2(Y, Z).

If you can eliminate the worker creation overhead, right recursion can be made to run close to left recursion in speed, right? In the end, the clever algorithms for path are similar to Dijkstra's algorithm with Fibonacci heaps.

I think I found a way to list the workers. The recently announced library(tableutil) does the job of showing the variants that the workers were responsible for. For left recursion I only get one worker.

/* SWI-Prolog 10.1.5 */
/* left recursion via tabling, |E| = 5000 */
?- random_edges(1), time(test).
% 10,788,121 inferences, 1.156 CPU in 1.183 seconds
true.

?- tstat(_, _).
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
       Top Number of answers count per variant                 
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
user:path(_,_) ................................ 980,099
true.

For right recursion, subsumptive tabling could reduce the number of workers, since (+,-) is subsumed by (-,-). But I didn't try the subsumption option yet. Without subsumption I get tons of workers, and data is computed twice: path2(_,_) will hold the union of the path2(K,_) answers. This might explain why my right recursion benchmarking for SWI shows more effort than the left recursion benchmarking:

/* right recursion via tabling, |E| = 5000 */
?- random_edges(1), time(test2).
% 39,647,694 inferences, 1.875 CPU in 1.976 seconds (95% CPU, 21145437 Lips)
true.

?- tstat(_,_).
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
       Top Number of answers count per variant                 
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
user:path2(_,_) ............................... 979,108
user:path2(927,_) ................................. 991
user:path2(266,_) ................................. 991
user:path2(661,_) ................................. 991
user:path2(0,_) ................................... 991
user:path2(395,_) ................................. 991
user:path2(798,_) ................................. 991
user:path2(129,_) ................................. 991
user:path2(532,_) ................................. 991
user:path2(919,_) ................................. 991
true ;
Etc...

profile/1.

True. Activation is not the problem here. After activation it feeds edges into the continuation, which implies retrieving the continuation, calling it (guarded by delimited continuation as it may hit a new tabled call). On success, add the new result and again feed that into the continuation.

According to profile/1, 63% goes into `$tbl_wkl_add_answer’/4, a foreign predicate that adds an answer to the table and, if it is new, to the “worklist”, scheduling the new one to be pushed through the continuation as well. 24% goes to calling the continuation, about half of that to the actual edge/2 call.

Most (70%) of the time of `$tbl_wkl_add_answer'/4 goes to updating the trie. Still, if I recall correctly, comparing trie operations between SWI-Prolog and XSB shows no big difference. Anyway, the goal of the tabling project was first of all to provide a second implementation of tabling, including incremental tabling and well-founded semantics as well as a bunch of other features unique to XSB's tabling. We never got to looking carefully at optimizing the implementation. There is probably quite a bit of fairly low-hanging fruit.