“Lips” is “Logical inferences per second”.
It comes from an old benchmark, “naïve reverse” (you can see the code here), that ran a simple loop – each call was counted as an “inference” (it involved a simple unification).
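For reference, here is a minimal sketch of the classic naïve-reverse benchmark (this is the textbook version, not necessarily the exact code behind the link above; the names nrev/2 and nrev_append/3 are mine):

    % Naive reverse: reversing a list of length N costs on the order of
    % N*N/2 calls, each a simple unification – hence “inferences”.
    nrev([], []).
    nrev([H|T], Reversed) :-
        nrev(T, ReversedTail),
        nrev_append(ReversedTail, [H], Reversed).

    % Plain append, written out (as nrev_append/3, to avoid clashing with
    % library append/3) so that every step counts as a call.
    nrev_append([], List, List).
    nrev_append([H|T], List, [H|Rest]) :-
        nrev_append(T, List, Rest).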
As a general rule, CPU time is roughly proportional to the number of inferences (or calls). You can get a rough idea of the complexity of the calls by looking at the “Lips” number – the higher the number, the simpler the calls and unifications.
The % CPU is how much CPU your program was actually getting – it is essentially the CPU (execution) time divided by the wall-clock time. 4% indicates that your program was mostly waiting – perhaps something else was running at higher priority?
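To make that concrete, SWI-Prolog’s time/1 reports all three figures for a goal. The output below is invented to match the 4% case, and run_queries/0 is just a placeholder for your own top-level goal:

    ?- time(run_queries).
    % 1,234,567 inferences, 0.200 CPU in 5.000 seconds (4% CPU, 6172835 Lips)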
If you want to understand more about your code and why it’s slow, you might look into profiling.
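In SWI-Prolog that can be as simple as running the goal under profile/1 (my_goal/0 is a placeholder for whatever you are measuring):

    ?- profile(my_goal).
    % Shows time and call counts per predicate (as a GUI window or a text
    % report, depending on your setup), so you can see where the CPU goes.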
First algorithm: 22 samples in 0.20 sec; 453 predicates; 2034 nodes in call-graph; distortion 0%
Second algorithm: 38 samples in 0.38 sec; 411 predicates; 1717 nodes in call-graph; distortion 4%
What is the difference between the two? Which one seems to have better performance?
What does “samples” mean?
What does the number of predicates mean?
What does the number of nodes mean?
And what does “distortion” mean?
Hopefully this will be my last question on this topic: is it possible to measure the DB size and memory usage? Can I use statistics/2 for that?
Profiling is for finding the “hot spots” in your code (see show_profile/1).
However, you don’t have many samples, so the results might not be very accurate.
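A sketch of one way to get more samples, assuming the goal is cheap enough to repeat: run it in a loop under the profiler (my_goal/0 is again a placeholder).

    % between/3 and forall/2 are standard; 100 iterations is arbitrary.
    ?- profile(forall(between(1, 100, _), my_goal)).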