Prolog without SLG resolution?

I have recently been solving Advent of Code 2023 puzzles in Prolog and found myself heavily reliant on tabling/SLG for tractability. I note that GNU Prolog and SICStus do not seem to support SLG. It made me wonder how Prolog would even handle these problems without SLG resolution.

Let's take the example from the manual featured here:

fib(0, 1) :- !.
fib(1, 1) :- !.
fib(N, F) :-
        N > 1,
        N1 is N-1,
        N2 is N-2,
        fib(N1, F1),
        fib(N2, F2),
        F is F1+F2.

One could just add :- table fib/2. and be done with it in SWI-Prolog, but how would you handle this in a performant way using only portable Prolog code? Would you have to resort to assert or similar, or is there some other pattern?

You can use memo/1 from Memoization:

:- dynamic memo_/1.

memo(Goal) :-
    (   memo_(Goal)
    ->  true
    ;   once(Goal),
        assertz(memo_(Goal))
    ).

fib_memo(0, 1).
fib_memo(1, 1).
fib_memo(N, F) :-
    N > 1,
    N1 is N-1,
    N2 is N-2,
    memo(fib_memo(N1, F1)),
    memo(fib_memo(N2, F2)),
    F is F1+F2.

Evidence of caching:

?- time(fib_memo(100, F)).
% 896 inferences
F = 573147844013817084101.

?- time(fib_memo(99, F)).
% 7 inferences
F = 354224848179261915075.

?- time(fib_memo(100, F)).
% 7 inferences
F = 573147844013817084101.

?- time(fib_memo(101, F)).
% 16 inferences
F = 927372692193078999176.

A variation is at memoisation.

SWI-Prolog also has the "memo" pack for SWI-Prolog.


The link above mentions passing around an association list as another option. That looks like a more realistic route, as I understand assert to be slow. Some of the AoC problems require a large number of puts to the cache and involve many cache hits. Presumably a hash table would make more sense than an assoc list, but I guess it isn't portable.
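For illustration, here is a minimal sketch of that association-list route, using library(assoc) (available in SWI-Prolog and SICStus; the AVL-tree predicates empty_assoc/1, get_assoc/3 and put_assoc/4 come from that library). The memo table is threaded through the computation as an accumulator pair instead of living in the database:

```prolog
:- use_module(library(assoc)).

% fib_assoc(+N, -F): F is the N-th Fibonacci number (fib(0) = 1,
% as in the posts above), memoised in an AVL tree that is threaded
% through the recursion instead of being asserted.
fib_assoc(N, F) :-
    empty_assoc(Memo0),
    fib_assoc(N, F, Memo0, _).

fib_assoc(0, 1, Memo, Memo) :- !.
fib_assoc(1, 1, Memo, Memo) :- !.
fib_assoc(N, F, Memo0, Memo) :-
    (   get_assoc(N, Memo0, F)          % cache hit
    ->  Memo = Memo0
    ;   N1 is N-1,
        N2 is N-2,
        fib_assoc(N1, F1, Memo0, Memo1),
        fib_assoc(N2, F2, Memo1, Memo2),
        F is F1+F2,
        put_assoc(N, Memo2, F, Memo)    % cache store
    ).
```

Lookups and stores are O(log N) rather than the (amortised) O(1) of an indexed dynamic predicate, but the predicate stays pure and needs no retract-based cleanup between runs.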

SWI-Prolog has nice JIT multi-argument indexing. It might take some time before it kicks in. I just noticed that my recent frontier/flooded solution behaves differently:

/* Cold Run, Higher Inference Count */
?- time(count(64,X)).
% 215,473 inferences, 0.031 CPU in 0.040 seconds (77% CPU, 6895136 Lips)
X = 3615.

/* Warm Run, lower Inference Count */
?- time(count(64,X)).
% 181,775 inferences, 0.016 CPU in 0.026 seconds (59% CPU, 11633600 Lips)
X = 3615.

Does the lower inference count mean that, as soon as indexes are present, the Prolog system can also perform choice point elimination? Or is it some other JIT-ing artefact? Not 100% sure what's going on.

We should also not neglect the graden/3 facts, which also need a JIT-ed index. I don't know whether hash table implementations can compete with such an implementation, since SWI-Prolog does all this natively, right?

Edit 30.12.2023
Here you see the indexes it has created:

?- jiti_list(flooded/2).
Predicate                                     Indexed  Buckets Speedup Flags
============================================================================
user:flooded/2                                  1+2      4,096  4095.0   
true.

?- jiti_list(frontier/2).
Predicate                                     Indexed  Buckets Speedup Flags
============================================================================
true.

It didn't create an index for frontier/2, since frontier is only called with a wildcard query. On the other hand, a crucial part of the algorithm is to check whether something was already flooded, and for that the just-in-time indexing created a multi-argument index, on argument 1 and then on argument 2. This could mean two cascaded hash tables, which might perform better than a single hash table, since computing a hash code of a pair is a science in itself, with a lot of pitfalls. Basically, the JIT multi-argument indexing might be quite Datalog friendly, whereas a single hash table might choke.

Disclaimer: could be wishful thinking, this is just some speculation.

SWI-Prolog's variant_hash/2 already creates a collision for very simple pairs:

?- variant_hash(0-0, X), Y is X mod 4.
X = 11974557,
Y = 1.

?- variant_hash(0-1, X), Y is X mod 4.
X = 2233065,
Y = 1.

?- variant_hash(1-0, X), Y is X mod 4.
X = 8284106,
Y = 2.

?- variant_hash(1-1, X), Y is X mod 4.
X = 14352072,
Y = 0.

There is already a collision for 0-0 and 0-1 when computing modulo 4.

Edit 30.12.2023
For example, the MURMUR hash is really only needed to index floating point numbers, 32-bit and 64-bit floats, since their bit pattern is strange: the significant stuff is shifted to the left.

But for integers you don't really need MURMUR most of the time; it only slows down the hashing. For pairing there are rather simple methods which are quite effective in terms of spreading the hash code.
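As a sketch of one such classical method, here is the Cantor pairing function (purely an illustration; none of the systems above are claimed to use this):

```prolog
% cantor_pair(+A, +B, -H): bijectively encode two non-negative
% integers into one. Because the encoding is a bijection N x N -> N,
% distinct pairs get distinct codes before any bucket modulo is applied.
cantor_pair(A, B, H) :-
    S is A + B,
    H is S * (S + 1) // 2 + B.
```

For instance, cantor_pair(0, 0, H) gives H = 0 and cantor_pair(0, 1, H) gives H = 2, so the 0-0 versus 0-1 collision shown above with variant_hash/2 modulo 4 does not occur here. Collisions can of course still appear after taking the code modulo a small bucket count.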

Although more serious research would probably be needed before advising against a single hash table for pairs, or against the MURMUR hash computation. Nevertheless, MURMUR has a good reputation.

In N3 we do it as a small tweak to the recursion (from double to single) eye/reasoning/fibonacci/fibonacci.n3 at master · eyereasoner/eye (github.com)
The queries eye/reasoning/fibonacci/fibonacciQ.n3 at master · eyereasoner/eye (github.com) give instantaneous answers eye/reasoning/fibonacci/fibonacciA.n3 at master · eyereasoner/eye (github.com)

The corresponding Prolog code looks like:

$ cat fibonacci.pl
fibonacci(A, B) :-
    fib(A, 0, 1, B).

fib(0, A, _, A) :-
    !.
fib(1, _, A, A) :-
    !.
fib(A, B, C, D) :-
    A > 1,
    E is A-1,
    F is B+C,
    fib(E, C, F, D).

which gives

$ time swipl -f fibonacci.pl -g "member(M, [0, 1, 6, 91, 283, 3674]), fibonacci(M, F), writeln(F), fail; halt"
0
1
8
4660046610375530309
62232491515607091882574410635924603070626544377175485625797
295872959797101479478634366815157108100573212705250690577871041398423606408217262643449728342664061812585639168722421830407677671667740585806703531229882783069925750619720511808616484846128237251921414441458265138672827487722512845223115526738192067144721087756159352711138340620702266509343657403678256247195010013499661223527119909308682062873140767135468966093474944529418214755911968500799987099146489838560114063096775586903976827512299123202488315139397181279903459556726060805948910609527571241968534269554079076649680403030083743420820438603816095671532163428933363322524736324029745871445486444623006627119156710782085648303485296149604974010598940800770684835758031137479033374229914629583184427269638360355586190323578625395157899987377625662075558684705457

real    0m0.033s
user    0m0.027s
sys     0m0.009s

We also follow the fibonacci(0) - Wolfram|Alpha (wolframalpha.com) convention that fibonacci(0) = 0.

Fibonacci is not a great example. Such a rewrite of the problem does not appear applicable to e.g. Advent of code 2023 - #160 by brebs

I would say that an explanation is missing. Something like: fib(N, A0, A1, An) - An is the N-th element of the Fibonacci sequence beginning with A0, A1.

For a first-argument indexing solution, all I see is that a hybrid association table doesn't incur an unbearable overhead; it works fine:

/* Trealla Prolog 2.32.13 */
?- time(fib_memo2(100000, _)).
% Time elapsed 0.398s, 999996 Inferences, 2.515 MLips
   true.

/* SWI-Prolog 9.1.21 */
?- time(fib_memo2(100000, _)).
% 599,994 inferences, 0.797 CPU in 0.874 seconds (91% CPU, 752934 Lips)
true.

/* Dogelog Player for Java 1.1.5, using a hybrid */
?- time(fib_memo2(100000, _)).
% Zeit 973 ms, GC 0 ms, Lips 5858160, Uhr 04.01.2024 10:23
true.

/* Scryer Prolog 0.9.3 */
?- time(fib_memo2(100000, _)).
   % CPU time: 1.052s, 1_299_990 inferences
   true.

I was using this trimmed version of memoization. Warning: the code below is not steadfast in the 2nd argument:

:- dynamic(memo2/2).

fib_memo2(N, F) :- memo2(N, F), !.
fib_memo2(0, 1) :- !.
fib_memo2(1, 1) :- !.
fib_memo2(N, F) :-
    N1 is N-1,
    N2 is N-2,
    fib_memo2(N1, F1),
    fib_memo2(N2, F2),
    F is F1+F2,
    assertz(memo2(N, F)).
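A minimal sketch of one common fix for the steadfastness problem: compute into a fresh variable and unify with the caller's argument only at the end, so the cuts and the memo lookup never commit based on a bound second argument (the wrapper name fib_memo2s/2 is made up here):

```prolog
% Steadfast wrapper around fib_memo2/2: the internal computation
% always runs with an unbound result variable, and unification with
% the caller's argument happens only once the value is known.
fib_memo2s(N, F) :-
    fib_memo2(N, F0),
    F = F0.
```

With the unwrapped predicate, a query such as fib_memo2(0, 0) fails to match the guarded clauses and recurses into negative N; the wrapper instead computes the correct value first and then fails cleanly on the final unification.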

Note to self: why is Trealla Prolog so fast? Something to do with the speed of the built-in predicate assertz/1?