A quick micro-benchmark comparing the different database mechanisms (each treated as a key-value store). These mechanisms have different advantages and disadvantages (backtracking behaviour, erasing methods, logical update view, etc.), but I just wanted to get a rough idea.
I don’t like micro-benchmarks much, but they are useful for a relative comparison:
With SWI-Prolog (Linux, threaded, 64 bits, version 8.1.13-27-gd480a2001):
14 ?- go.
rec_gen
% 2,000,001 inferences, 0.377 CPU in 0.377 seconds (100% CPU, 5305095 Lips)
rec_lookup
% 2,000,001 inferences, 0.355 CPU in 0.355 seconds (100% CPU, 5633298 Lips)
rec_lookup (2nd time)
% 2,000,001 inferences, 0.356 CPU in 0.356 seconds (100% CPU, 5620478 Lips)
nb_gen
% 4,000,001 inferences, 0.255 CPU in 0.255 seconds (100% CPU, 15704181 Lips)
nb_lookup
% 2,000,001 inferences, 0.131 CPU in 0.132 seconds (99% CPU, 15222120 Lips)
nb_lookup (2nd time)
% 2,000,001 inferences, 0.131 CPU in 0.131 seconds (100% CPU, 15274105 Lips)
asrt_gen
% 2,000,001 inferences, 0.604 CPU in 0.607 seconds (100% CPU, 3313618 Lips)
asrt_lookup
% 2,000,001 inferences, 1.516 CPU in 1.523 seconds (100% CPU, 1318850 Lips)
asrt_lookup (2nd time)
% 2,000,001 inferences, 0.381 CPU in 0.382 seconds (100% CPU, 5254804 Lips)
true.
So nb_getval and nb_setval are the fastest, as would be expected (with the limitation of only one value per key). The record database is fast on the first run of lookups and maintains the same speed on further lookups. The assert database is slow on the first lookup, but reaches record database speeds afterwards.
I wonder if the assert lookup speeds up considerably on the second run because of JIT indexing?
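One way to test that guess (SWI-specific, and I’m not sure the exact shape of the property term is stable across versions) is to ask predicate_property/2 whether an index has been built on keydb/2 after the first lookup pass:

```prolog
% After one pass of asrt_lookup/1, SWI should have JIT-built a
% first-argument index on keydb/2; predicate_property/2 with
% indexed/1 should then succeed and describe it.
?- asrt_gen(1_000_000),
   asrt_lookup(1_000_000),
   predicate_property(keydb(_,_), indexed(Indexes)).
```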
With the stable version (Linux, threaded, 64 bits, version 8.0.3) things are faster in general:
2 ?- go.
rec_gen
% 2,000,002 inferences, 0.313 CPU in 0.313 seconds (100% CPU, 6394146 Lips)
rec_lookup
% 2,000,002 inferences, 0.262 CPU in 0.263 seconds (100% CPU, 7621965 Lips)
rec_lookup (2nd time)
% 2,000,002 inferences, 0.259 CPU in 0.260 seconds (100% CPU, 7711132 Lips)
nb_gen
% 4,000,002 inferences, 0.168 CPU in 0.168 seconds (100% CPU, 23870140 Lips)
nb_lookup
% 2,000,002 inferences, 0.101 CPU in 0.101 seconds (100% CPU, 19743315 Lips)
nb_lookup (2nd time)
% 2,000,002 inferences, 0.101 CPU in 0.101 seconds (100% CPU, 19876580 Lips)
asrt_gen
% 2,000,002 inferences, 0.626 CPU in 0.628 seconds (100% CPU, 3194608 Lips)
asrt_lookup
% 2,000,002 inferences, 0.904 CPU in 0.906 seconds (100% CPU, 2212180 Lips)
asrt_lookup (2nd time)
% 2,000,002 inferences, 0.289 CPU in 0.289 seconds (100% CPU, 6931781 Lips)
true.
I guess the (dynamic) tabling code is making things a little bit slower in the dev version.
Here is the micro-benchmark code (not perfect, but enough for a rough idea). I would like to see what you get on your machine:
go :-
    test(rec),
    test(nb),
    test(asrt).
test(Name) :-
    N = 1_000_000,
    atomic_list_concat([Name,'_',gen], Gen),
    atomic_list_concat([Name,'_',lookup], Lookup),
    atomic_list_concat([Name,'_',clear], Clear),
    ignore(call(Clear, N)),
    writeln(Gen),
    time( call(Gen, N) ),
    writeln(Lookup),
    time( call(Lookup, N) ),
    write(Lookup), writeln(' (2nd time)'),
    time( call(Lookup, N) ).
rec_clear(N) :-
    forall( between(1,N,I),
            ( recorded(I, I, Ref),
              erase(Ref)
            )
          ).

rec_gen(N) :-
    forall( between(1,N,I),
            recordz(I, I)
          ).

rec_lookup(N) :-
    forall( between(1,N,I),
            recorded(I, I)
          ).
nb_clear(_).

nb_gen(N) :-
    forall( between(1,N,I),
            % ( atom_number(A,I), nb_setval(A, I) )
            nb_setval('500', I)
          ).

nb_lookup(N) :-
    forall( between(1,N,_),
            nb_getval('500', _)
          ).
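(As an aside, the “one value per key” limitation mentioned above is easy to see at the toplevel: a second nb_setval/2 on the same key simply overwrites the first value.)

```prolog
?- nb_setval(k, 1), nb_setval(k, 2), nb_getval(k, V).
V = 2.
```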
:- dynamic keydb/2.

asrt_clear(_) :-
    retractall(keydb(_,_)).

asrt_gen(N) :-
    % retractall(keydb(_,_)),
    forall( between(1,N,I),
            assertz(keydb(I,I))
          ).

asrt_lookup(N) :-
    forall( between(1,N,I),
            keydb(I,I)
          ).
EDIT: the difference between stable and dev is not much, so it could be within the statistical error.