Declarative predicates by limiting resources

I always prefer declarative predicates (if possible) because:

  1. They force my mind to think about the problem in a different, better way (this is the most important reason).
  2. They are easier to read and understand when I come back to them later.
  3. They are usually more efficient.

So I was doing a little exercise: describing a binary tree using declarative predicates. The first try was:

btree(bt(A,B)) :-
   ( btree(A); leaf(A) ),
   ( btree(B); leaf(B) ).

leaf(l(_)).

It works great for ground queries (not that useful).

I also want the predicate to be able to answer the most general query, namely btree(T).

Obviously the above will not terminate.

One solution

So my first thought went to tabling, but tabling won’t terminate either in this case (for the most general query), because tabling needs a finite set of answers.
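For reference, this is what the naive tabling attempt would look like (just the definition above plus a table directive); the answer set of btree/1 for the most general query is infinite, so the tabled evaluation never completes:

:- table btree/1.

btree(bt(A,B)) :-
   ( btree(A); leaf(A) ),
   ( btree(B); leaf(B) ).

leaf(l(_)).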

Okay, so what is the real problem? The problem is that we have an unbounded recursion. This brought me to think about call_with_depth_limit/3 and this solution:

depth(4). %This is just for testing, can be replaced by a complex algorithm.

btree(T) :-
   depth(D),
   call_with_depth_limit(btree_(T),D,_).

btree_(bt(A,B)) :-
   ( btree_(A); leaf(A) ),
   ( btree_(B); leaf(B) ).

leaf(l(_)).

This works great, and it answers the most general query:

12 ?- btree(T).
T = bt(bt(bt(l(_7898), l(_7902)), bt(l(_7912), l(_7916))), bt(bt(l(_7932), l(_7936)), bt(l(_7946), l(_7950)))) ;
T = bt(bt(bt(l(_7898), l(_7902)), bt(l(_7912), l(_7916))), bt(bt(l(_7932), l(_7936)), l(_7940))) ;
T = bt(bt(bt(l(_7898), l(_7902)), bt(l(_7912), l(_7916))), bt(l(_7926), bt(l(_7936), l(_7940)))) ;
T = bt(bt(bt(l(_7898), l(_7902)), bt(l(_7912), l(_7916))), bt(l(_7926), l(_7930))) ;
T = bt(bt(bt(l(_7898), l(_7902)), bt(l(_7912), l(_7916))), l(_7920)) ;
T = bt(bt(bt(l(_7898), l(_7902)), l(_7906)), bt(bt(l(_7922), l(_7926)), bt(l(_7936), l(_7940)))) ;
T = bt(bt(bt(l(_7898), l(_7902)), l(_7906)), bt(bt(l(_7922), l(_7926)), l(_7930))) ;
T = bt(bt(bt(l(_7898), l(_7902)), l(_7906)), bt(l(_7916), bt(l(_7926), l(_7930)))) ;
T = bt(bt(bt(l(_7898), l(_7902)), l(_7906)), bt(l(_7916), l(_7920))) ;
T = bt(bt(bt(l(_7898), l(_7902)), l(_7906)), l(_7910)) ;
T = bt(bt(l(_7892), bt(l(_7902), l(_7906))), bt(bt(l(_7922), l(_7926)), bt(l(_7936), l(_7940)))) ;
T = bt(bt(l(_7892), bt(l(_7902), l(_7906))), bt(bt(l(_7922), l(_7926)), l(_7930))) ;
T = bt(bt(l(_7892), bt(l(_7902), l(_7906))), bt(l(_7916), bt(l(_7926), l(_7930)))) ;
T = bt(bt(l(_7892), bt(l(_7902), l(_7906))), bt(l(_7916), l(_7920))) ;
T = bt(bt(l(_7892), bt(l(_7902), l(_7906))), l(_7910)) ;
T = bt(bt(l(_7892), l(_7896)), bt(bt(l(_7912), l(_7916)), bt(l(_7926), l(_7930)))) ;
T = bt(bt(l(_7892), l(_7896)), bt(bt(l(_7912), l(_7916)), l(_7920))) ;
T = bt(bt(l(_7892), l(_7896)), bt(l(_7906), bt(l(_7916), l(_7920)))) ;
T = bt(bt(l(_7892), l(_7896)), bt(l(_7906), l(_7910))) ;
T = bt(bt(l(_7892), l(_7896)), l(_7900)) ;
T = bt(l(_7886), bt(bt(l(_7902), l(_7906)), bt(l(_7916), l(_7920)))) ;
T = bt(l(_7886), bt(bt(l(_7902), l(_7906)), l(_7910))) ;
T = bt(l(_7886), bt(l(_7896), bt(l(_7906), l(_7910)))) ;
T = bt(l(_7886), bt(l(_7896), l(_7900))) ;
T = bt(l(_7886), l(_7890)).

In a very efficient way (ca. 350 inferences):

13 ?- time((btree(T),fail)).
% 343 inferences, 0.000 CPU in 0.000 seconds (94% CPU, 5723153 Lips)
false.

Instead of choosing the most complex tree first, we can rearrange the order to choose the simplest tree first:

btree_(bt(A,B)) :-
   (  leaf(A); btree_(A) ),
   (  leaf(B); btree_(B) ).

Good.

The obvious disadvantage of the recursion limiting approach is that we have to figure out a proper depth for the call, but I can live with that, given the advantages of the declarative model.

One can also implement iterative deepening with this model, as in the sketch below. It is clean and efficient.
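A minimal sketch of such an iterative-deepening wrapper (btree_id/1 is a name I made up). Demanding that the reached depth equals the current limit keeps only proofs that actually need this limit, so solutions already found at shallower limits are not reported again:

btree_id(T) :-
   between(1, inf, Limit),
   call_with_depth_limit(btree_(T), Limit, Depth),
   Depth == Limit.   % skip solutions already found at lower limits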

  • Do you guys have a better way (remember, one of the requirements is to be able to answer the most general query with a declarative model)?
  • Do you see any major problem with this approach?

EDIT: Note that my point is not specifically about binary trees; I know there are many more efficient and faster ways to handle those.

Why not simply:

btree(l(_)).
btree(bt(A,B)) :-
    btree(A),
    btree(B).

This definition allows you to both recognize and generate binary trees.
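For instance (a query sketch; note that on generation the right branch grows first):

?- btree(bt(l(a), bt(l(b), l(c)))).   % recognize a ground tree
true.

?- btree(T).                          % generate trees
T = l(_) ;
T = bt(l(_), l(_)) ;
T = bt(l(_), bt(l(_), l(_))) .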

Well, I wasn’t considering a leaf l(_) as a tree, but I prefer yours if we accept a simple leaf as a tree.

Just realized my second solution actually keeps producing answers without the need for call_with_depth_limit/3:

btree_(bt(A,B)) :-
   (  leaf(A); btree_(A) ),
   (  leaf(B); btree_(B) ).

So this is what I learned:

  • When modeling a predicate declaratively, put the simple data structures first. Of course I knew this from left-recursion problems and from the normal Prolog practice of putting the base case first, but it didn’t click until now.

I am now looking for an example that can’t be made to terminate simply by changing the order of clauses. If you know of any, I’d appreciate it.

My goal is really to identify a number of generic patterns that can be used to declaratively model infinite recursion problems, even if call_with_depth_limit/3 has to be used.

Note that you still have something that is not really nice: you get bts that grow bigger and bigger in the second argument, while the first always remains a leaf. In other words, you do not generate all possible trees. Iterative deepening solves that; I wouldn’t know another way using such a simple approach. One alternative is to define a mapping from integers to trees and enumerate the integers (see the sketch below).
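One way to realize the integer-to-tree idea: index trees by their number of leaves and enumerate the sizes (btree_n/2 and fair_btree/1 are names I made up; here a bare leaf counts as a tree, as in the simpler definition above):

%% btree_n(?T, +N): T is a binary tree with exactly N leaves.
btree_n(l(_), 1).
btree_n(bt(A,B), N) :-
   N > 1,
   N1 is N - 1,
   between(1, N1, NA),   % leaves in the left subtree
   NB is N - NA,         % leaves in the right subtree
   btree_n(A, NA),
   btree_n(B, NB).

%% fair_btree(-T): enumerate every tree, smallest first.
fair_btree(T) :-
   between(1, inf, N),
   btree_n(T, N).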

You’re right, I am back to my first solution with call_with_depth_limit/3.

Now, with iterative deepening, if we first try with, let’s say, a depth of 4, and then want to try the next depth, say 10,
the predicate would repeat the computations for levels 1-4.

This led me to think about using tabling to prevent re-computation of previously computed depths.

But I ran into a strange result:

depth(4). %This is just for testing, can be replaced by a complex algorithm.

btree(T) :-
      depth(D),
      call_with_depth_limit(btree_(T),D,_).

:- table btree_/1.
btree_(bt(A,B)) :-
   (  leaf(A); btree_(A) ),
   (  leaf(B); btree_(B) ).

leaf(l(_)).

Query:

60 ?- btree(T).
true.

Is this a bug or am I missing something? I would expect to get the same answers as when btree_/1 is not tabled.

Perhaps it is because of the interaction of tabling with call_with_depth_limit/3.

Tabling gets you into a quite different call mechanism that doesn’t seem to work very well with call_with_depth_limit/3. You get more meaningful results using, e.g.:

depth(2). %This is just for testing, can be replaced by a complex algorithm.

btree(T) :-
      depth(D),
      btree_(T, D).

:- table btree_/2.
btree_(bt(A,B), D) :-
   D > 0,
   D1 is D - 1,
   (  leaf(A); btree_(A, D1) ),
   (  leaf(B); btree_(B, D1) ).

leaf(l(_)).

But it doesn’t help, as the tables are keyed on the depth. The way you tried won’t work either: even if the interaction with call_with_depth_limit/3 worked, the first enumeration would fill the table, and subsequent calls with a higher depth limit would see no reason to re-evaluate it.

The overall idea behind iterative deepening is that typically the number of solutions at the next level is a lot higher, so re-generating the couple of solutions from the previous depth is irrelevant. If you want to get rid of the duplicates, use distinct/2 (see the sketch below), but be aware it is costly in terms of memory.
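A sketch of that (distinct/2 lives in library(solution_sequences); the answer table it keeps is the memory cost just mentioned, and btree_nodup/1 is a made-up name):

:- use_module(library(solution_sequences)).

btree_nodup(T) :-
   distinct(T, ( between(1, inf, D),
                 call_with_depth_limit(btree_(T), D, R),
                 R \== depth_limit_exceeded
               )).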


Hmm. Yes, I suspected that the SLG interaction with call_with_depth_limit/3 might cause trouble.

Perhaps something like call_with_depth_limit/3 can be emulated for tabling using lattice mode, and we table btree/1 instead of btree_/1. What do you think?

My real interest is not in iterative deepening itself, but in solving the infinite recursion problem, perhaps by limiting resources. I just picked recursion depth as the resource to limit, but I really don’t mind how we limit it; it could be inferences with call_with_inference_limit/3 or anything else.
BTW, call_with_inference_limit/3 has the same problem with tabling (SLG) as call_with_depth_limit/3.

I think if we find a general solution to this problem it would be very, very useful for Prolog as a whole.


call with continuation and limit

Another option is to have a predicate:
call_with_continuation(Goal, DepthLimit, Continuation)

This predicate would call Goal with a depth limit that works like call_with_depth_limit/3, but instead of returning depth_limit_exceeded it would return a continuation term that could be invoked with call/1.

Could this be implemented with shift/1 and reset/3?
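For context, a minimal reset/3 and shift/1 round trip in SWI-Prolog (a toy sketch; as discussed below, the captured continuation holds only the remaining conjunction of the goal, not its open choice points):

demo :-
   reset(paused_goal, Ball, Cont),
   (   Cont == 0
   ->  true                   % paused_goal completed without shifting
   ;   writeln(got(Ball)),
       call(Cont)             % resume right after the shift/1
   ).

paused_goal :-
   writeln(before),
   shift(pause),
   writeln(after).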

Not sure I get this. I guess the whole problem is related to the mathematical concept of countability. If all choice points are finite we have a finite set of solutions that we will nicely cover. If one is infinite, it depends on the ordering. If the infinite one is the oldest, all still works fine: the finite ones will be exhausted, advancing the infinite one, etc. If the infinite one is not the oldest, the choice points that are older are never explored. So with one infinite choice point, ordering can (always?) fix the problem.

With two infinite ones (your case), this doesn’t work. You can use call_with_depth_limit/3 to make the infinite choices finite and do iterative deepening. As long as you do not use infinite foreign choice points (e.g., repeat/0), this should work fine. It creates duplicates. Possibly you can solve that by demanding that the reached depth is exactly the limit? Not 100% sure.

Otherwise, I think you are stuck with normal Prolog. My intuition says delimited continuations are probably not an answer, as they capture the continuation but not the (open) choices. Possibly you can restructure to get what you want. Alternatively you can use engines and restructure the program such that each engine satisfies the rules above (at most one infinite choice point, which is the oldest), and then take answers from each of them and combine the results.

Finally, you can map your infinite space to another infinite set that is easily counted, i.e., make a mapping from integers to your btrees, count the integers, and map each to a btree.

Right, I think you have a great grasp of the problem here.

Exactly, this is the thought process I had when choosing call_with_depth_limit/3 (or any other limiting meta-call).

Here is where I think we are looking at the problem from two different angles. Certainly it creates duplicates, but I don’t really see that as the major problem. I think the real problem is that it repeats a computation that was already done, and that is why we get duplicates.

We could simulate an expensive computation simply by having

leaf(l(_)) :- sleep(0.3).

Then you could see that the problem is not duplicates, but the fact that the computation is being repeated.

This is why I thought of continuations: in order to convert the infinite into the finite we could just insert continuations and hop from continuation to continuation, providing a novel way of iterative deepening which would not repeat the computations.

Then we run into the problem that you mentioned: current delimited continuations don’t capture the remaining choices.

What we really need is a true Prolog continuation that captures the whole environment (including open choice points). Then we could move through the (many) infinite series in chunks without repeating any computation. The chunks would be processed by a meta-call that limits the depth (or any other resource) and returns a continuation, so that we can move on to the next chunk.

Interestingly, this is kind of what many web searches do.

When a web search gets a very large result set (infinite from a practical point of view), it just offers a paged view of the semi-infinite set (with a limited number of results), and you click the continuation (next) to see the next chunk.

It never recomputes the previous chunks; that would be a waste of computing resources.

Does this make sense?

Sure, but at Level+1 you will have twice as many leaves as at Level, so the wasted time is just log(N). The big gain is that this runs in minimal memory. So, even if you can save enough state to avoid the repeated computation, you certainly lose on memory and quite likely lose on time as well, as all the state saving/restoring is quite likely to be more expensive than the additional computation.

Only if the number of answers between levels varies little will the overhead be significant. Nobody says you need to increment the depth by one, though; you can simply increment it by large enough steps, as in the sketch below.
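A sketch of deepening in larger steps (btree_step/2 is a made-up name; solutions from earlier steps are regenerated, which is the accepted overhead, or can be filtered as discussed above):

btree_step(T, Step) :-
   between(1, inf, I),
   Limit is I * Step,
   call_with_depth_limit(btree_(T), Limit, _).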

Iterative deepening is one of those somewhat counter-intuitive techniques :-)


Maybe I am missing something here. Let’s see it in code.

leaf(l(_)) :- sleep(0.1).  % Simulates a 100ms computation

btree(T,D) :-
   call_with_depth_limit(btree_(T),D,_).

btree1(T,I) :-
   call_with_inference_limit(btree_(T),I,_).

btree2(T,L) :-
   call_with_time_limit(btree_(T),L).

btree_(bt(A,B)) :-
   (  leaf(A); btree_(A) ),
   (  leaf(B); btree_(B) ).

Queries:

150 ?- time( (btree(T,5), fail) ).
% 549 inferences, 0.001 CPU in 6.005 seconds (0% CPU, 618338 Lips)

153 ?- time( (btree(T,6), fail) ).
% 14,751 inferences, 0.029 CPU in 164.836 seconds (0% CPU, 511286 Lips)

So you’re saying it is not worth the effort because we would save less than 5%? I think that’s debatable, but let’s assume we accept it.

(By the way, I couldn’t test with btree1/2 and btree2/2 in the above code; they don’t behave as expected.)

My point is about dividing the computation space into linear chunks (like a web search results page does, e.g. 100 results at a time), not necessarily by depth of recursion (which produces a geometric series, not a linear one).

The continuation would allow us to move between these linear chunks, and the geometric multiplication would not occur. This is why I don’t limit the thinking to iterative deepening, but consider any way of turning the infinite into the finite (hence btree1 and btree2 in the example above, though I had problems running those).

Am I missing something?

EDIT: To make it a little clearer: if we could divide the computation into fixed chunks of linear computing resources (e.g. 25 seconds, 1000 inferences, 10 results at a time, etc.) we would avoid the geometric multiplication. This is what I meant by a continuation (I was thinking linear in my head, not geometric).

If you have an (in)finite stream as the set of solutions to a non-deterministic query, you can use findnsols/4 to chop it into pieces while maintaining the state (see the sketches below). If you want to put that aside for a while, for example in a web programming scenario, put the generator in an engine. The engine captures a complete state, represented as a set of Prolog stacks and VM registers. These primitives are the underpinning of the Pengines chunk mechanism.
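A small sketch of both (findnsols/4 and the engine API are SWI built-ins; next_chunk/1 and the sample query are mine):

% Take the infinite solution stream 10 trees at a time; on
% backtracking, findnsols/4 resumes the generator rather than
% recomputing it.
next_chunk(Chunk) :-
   findnsols(10, T, btree_(T), Chunk).

% Engines: park the whole generator state and pull answers on demand.
% ?- engine_create(T, btree_(T), E),
%    engine_next(E, T1),    % first tree
%    engine_next(E, T2).    % second tree; state is kept inside E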

This is (IMO) completely distinct from the redundant computation that results from iterative deepening.


Thanks, I didn’t know about findnsols/[4,5]. At first I thought it would solve the problem, but on thinking about it I suspected it wouldn’t work, because:

  • it doesn’t limit the recursion by unwinding the stack;
  • it simply takes the backtracked solutions, kind of like a chunked findall/3. This is what I think you meant when you said it is very different from iterative deepening;
  • I tested it, and it does what I suspected: it simply leaves the first element as a leaf while the second element grows infinitely (the same thing that happens without any limiting meta-call).

I still think it is possible to solve the problem by limiting recursion (including stack unwinding) with a metric other than recursion depth, namely a linear metric, and using modified continuations.

However, I am happy enough with the call_with_depth_limit/3 solution. I’ll try to wrap it in a simple directive, something like:

:- ideepen(btree/1,[step(1)]).  % iterative deepening with a step of 1

btree(bt(A,B)) :-
   (  leaf(A); btree(A) ),
   (  leaf(B); btree(B) ).

leaf(l(_)).

This will make it very easy to use more declarative code, which was the original goal. I think it is a good enough solution for now.
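One way the directive could map onto a meta-predicate, roughly (a sketch; the ideepen/2 goal and its step option are my own naming):

:- use_module(library(option)).
:- meta_predicate ideepen(0, +).

ideepen(Goal, Options) :-
   option(step(Step), Options, 1),
   between(1, inf, I),
   Limit is I * Step,
   call_with_depth_limit(Goal, Limit, Depth),
   integer(Depth),          % not depth_limit_exceeded
   Depth > Limit - Step.    % only solutions new at this step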

Look at the wrap_predicate/4 stuff that is also used for tabling. Ideal for this type of declaration!


Yes, that’s exactly what I thought! Thanks for making that predicate available! I think it will be useful for many kinds of new calling mechanisms.

For some reason wrap_predicate/4 is not backtracking properly with call_with_depth_limit/3; here is a minimized example:

wrap_test(Pred/Arity) :-
   atom(Pred), integer(Arity), Arity>=0,
   functor(Head,Pred,Arity),
   wrap_predicate(Head, ideepen, OrigPred,
      (   call_with_depth_limit(OrigPred, 3, _)
      )
   ).

leaf(l(_)).

btree(bt(A,B)) :-
   (  leaf(A); btree(A) ),
   (  leaf(B); btree(B) ).

Query:

2 ?- call_with_depth_limit(btree(T),3,_).
T = bt(l(_6322), l(_6326)) ;
T = bt(l(_6322), bt(l(_6332), l(_6336))) ;
T = bt(bt(l(_6328), l(_6332)), l(_6336)) ;
T = bt(bt(l(_6328), l(_6332)), bt(l(_6342), l(_6346))) ;
true.

3 ?- wrap_test(btree/1).
true.

4 ?- btree(T).
T = bt(l(_5938), l(_5942)) ;
true.

5 ?- 

The last call should also backtrack. Am I missing something?

:-( The problem is that this also wraps the recursive calls. This is typically what you want, but not for this case. It seems this calls more for a meta-predicate than for modifying the predicate itself.

I have already made the meta-predicate (which is what the wrapper calls), but I wanted to provide a very ergonomic solution, just as we have with tabling, so that I could simply say:

:- iterate_depths PredIndicator
[...]

Just like we do with tabling.

Is there a possibility to provide wrap_predicate/4 with an option to not wrap the recursive calls?

This can be very useful for implementing new and novel calling mechanisms in a very ergonomic way.