I’m getting familiar with tabling by running the fib/2 example from the SWI documentation.

```prolog
:- table fib/2.

test :-
        time(fib(1000, X)),
        format('~w~n', [X]).

fib(0, 1) :- !.
fib(1, 1) :- !.
fib(N, F) :-
        N > 1,
        N1 is N-1,
        N2 is N-2,
        fib(N1, F1),
        fib(N2, F2),
        F is F1+F2.
```

It works fine, but the number of inferences required on the second and subsequent executions seems *too* good.

```
?- test.
39,032 inferences, 0.016 CPU in 0.016 seconds (100% CPU, 2498048 Lips)
?- test.
5 inferences, 0.000 CPU in 0.000 seconds (?% CPU, Infinite Lips)
```

I tried it on SWISH and each execution has the same resource requirements:

```
39,022 inferences, 0.022 CPU in 0.022 seconds (100% CPU, 1743120 Lips)
```

According to Wikipedia:

> Subgoals encountered in a query evaluation are maintained in a table, along with answers to these subgoals. If a subgoal is re-encountered, the evaluation reuses information from the table rather than re-performing resolution against program clauses.

The 5 inferences make it look like the tabling is persisting between (top level) goal executions. Or maybe this is some feature of the top level I don’t understand.


Tables are indeed persistent. SWISH runs each query in its own isolated environment, so tables do not persist between SWISH queries.

As such, the persistence is a good thing: you computed the answers once, so why do it again? Recomputing only makes sense if the data from which the table was computed has changed. We have incremental and monotonic tabling to deal with that. Both maintain a graph of dependencies from dynamic predicates to tables. Incremental tabling simply invalidates tables that depend on a change to a dynamic predicate. Monotonic tabling distinguishes assert from retract: on retract it behaves like incremental tabling, but on assert it pushes the new answers through the dependencies. That only works if asserted facts cannot invalidate answers through negation, hence the name *monotonic*.
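A minimal sketch of incremental tabling, using made-up predicates `edge/2` and `reach/2` (not from the post). Both the dynamic source data and the tabled predicate must be declared `as incremental` so that SWI-Prolog can maintain the dependency graph between them:

```prolog
% Sketch: incremental tabling over a dynamic edge relation.
% Both declarations must carry "as incremental".
:- dynamic edge/2 as incremental.
:- table reach/2 as incremental.

edge(a, b).
edge(b, c).

% Transitive closure over edge/2.
reach(X, Y) :- edge(X, Y).
reach(X, Y) :- edge(X, Z), reach(Z, Y).
```

After `?- reach(a, X).` has filled the table, `?- assertz(edge(c, d)).` invalidates the dependent tables, and the next `reach/2` query recomputes them, so `reach(a, d)` now succeeds. Replacing `as incremental` with `as monotonic` in both declarations should instead push the new answers into the existing tables on assert.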

In addition to this, you can always discard any table at any moment, as the system will recompute it when needed. At some point we should implement mechanisms that manage table resources automatically, destroying tables based on statistics such as time to compute, memory consumption and access frequency. That probably needs to wait until some user has a real demand for it and the resources to make it happen (i.e., do it or pay for it :slight_smile: )
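For example, the `fib/2` tables from the question can be discarded by hand with the SWI-Prolog built-ins `abolish_table_subgoals/1` and `abolish_all_tables/0`; the next query simply recomputes them (the two `drop_*` wrappers below are my own names for illustration):

```prolog
:- table fib/2.

fib(0, 1) :- !.
fib(1, 1) :- !.
fib(N, F) :-
        N > 1,
        N1 is N-1, N2 is N-2,
        fib(N1, F1), fib(N2, F2),
        F is F1+F2.

% After a query has filled the table, either of these discards it;
% the next fib/2 call starts from scratch again.
drop_fib_tables :- abolish_table_subgoals(fib(_, _)).
drop_all_tables :- abolish_all_tables.
```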

From what I understand you can only assert positive facts – I guess the problem case is an asserted fact that matches a negated goal in the body of a rule: that goal would now succeed, so the negation fails?


Yes. `p(X) :- \+ data(X).` is enough to make `assert(data(1))` invalidate `p(1)`.