Depth first search not the default?

I was playing around with transitive closures (at least my definition of them) and got better-than-expected results which puzzled me a bit. I’ll start with a diagram of the graph before showing the code:

Graph

This is a game tree in which red circles are a loss, blue a victory, and black a draw.

My Prolog version of the above looks like so:

move(a, 1, b).
move(a, 2, c).
move(a, 3, d).
move(b, 4, e).
move(b, 5, f).
% move(b, 9, a). % cycle
move(c, 4, g).
move(c, 5, h).
move(d, 4, i).
move(d, 5, j).
move(e, 6, k).
move(g, 6, l).
move(i, 7, m).
move(j, 8, n).

win(k).
win(n).
draw(l).
draw(m).
lose(f).
lose(h).

transitive_closure(Start, Path) :-
  transitive_closure_(Start, [], RPath),
  reverse(RPath, Path).
  
transitive_closure_(S0, Ms, [lose,M|Ms]) :-
  move(S0, M, S1),
  lose(S1), !.

transitive_closure_(S0, Ms, [draw,M|Ms]) :-
  move(S0, M, S1),
  draw(S1), !.

transitive_closure_(S0, Ms, [win,M|Ms]) :-
  move(S0, M, S1),
  win(S1), !.

transitive_closure_(S0, Ms, Acc) :-
  move(S0, M, S1),
  transitive_closure_(S1, [M|Ms], Acc).

Running transitive_closure(a, Path) yields:

?- transitive_closure(a, Path).
Path = [1, 5, lose] ;
Path = [2, 5, lose] ;
Path = [3, 4, 7, draw] ;
Path = [3, 5, 8, win].

This surprised me, since I thought Prolog would default to a depth first search, in which case the first result would be Path = [1, 4, 6, win].

I also tried to hang the above code by making the game tree look like this:

Cycle

by uncommenting move(b, 9, a)., but it still gave the correct solution.

Is SWI Prolog’s default search breadth first or iterative deepening rather than depth first? (It’s nice that it’s smarter than expected, but I’m confused as to why.)

The search order is based upon the ordering of your clauses. Prolog tries to satisfy clauses in the order they appear in the code. Your first 3 clauses test for a finish state and it’s only the last clause that increases the depth of your move tree. Hence you are seeing results with minimal path lengths first. If you instead move your final transitive_closure_ clause above the others, you will see that the first result is [1, 4, 6, win].
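
For concreteness, here is that reordering as a sketch (the same four clauses, with the recursive clause moved to the top):

transitive_closure_(S0, Ms, Acc) :-          % deepen first ...
  move(S0, M, S1),
  transitive_closure_(S1, [M|Ms], Acc).

transitive_closure_(S0, Ms, [lose,M|Ms]) :-  % ... then test for terminal states
  move(S0, M, S1),
  lose(S1), !.

transitive_closure_(S0, Ms, [draw,M|Ms]) :-
  move(S0, M, S1),
  draw(S1), !.

transitive_closure_(S0, Ms, [win,M|Ms]) :-
  move(S0, M, S1),
  win(S1), !.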

As to your second question about why keeping move(b, 9, a) doesn’t cause an infinite loop, it is again due to clause ordering, and the use of cuts! When your traversal is at b, it looks for a move that ends in a final state (it finds 5, for the lose state); the cut then prunes all other paths from b (which includes 9, your infinite loop, and 4, which would lead to a win state). Were you not surprised your program didn’t show the path [1, 4, 6, win]?


In my understanding, which is a bit naive since I never learned this language properly (I only had a class 20 years ago from a teacher who never learned this language properly either), the code searches for a solution with “lose” first, since this is the first rule for transitive_closure_.

I’m unsure whether the cuts are helpful here. They don’t seem to respect the pattern of 1. test, 2. cut, 3. unify the result.

If I use an explicit depth first search as follows:

connected([Parent|Parents], [Child|Children], Moves, Acc) :-
  move(Parent, Move, Child), !,
  connected(Parents, [Parent, Child|Children], [Move|Moves], Acc).

connected([_|Parents], Children, Moves, Acc) :-
  connected(Parents, Children, Moves, Acc).

connected([], _, Acc, Acc).

% Depth first

depthfirst(Start, [R|Path]) :-
  depthfirst_([Start], [], [R, L|ReversedPath]),
  connected(ReversedPath, [L], [], Path).
  
% depthfirst_([], ReversedPath, ReversedPath).

depthfirst_([State|_], ReversedPath, [lose, State|ReversedPath]) :-
  lose(State).
  
depthfirst_([State|_], ReversedPath, [draw, State|ReversedPath]) :-
  draw(State).

depthfirst_([State|_], ReversedPath, [win, State|ReversedPath]) :-
  win(State).

depthfirst_([Parent|Frontier0], ReversedPath, Acc) :-
  getchildren(Parent, Children),
  append(Children, Frontier0, Frontier1),
  depthfirst_(Frontier1, [Parent|ReversedPath], Acc).
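
where getchildren/2 collects and sorts a node’s children:

getchildren(Parent, Children) :-
  findall(Child, move(Parent, _, Child), Unsorted),
  sort(Unsorted, Children).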

I’d get the following results:

?- depthfirst(a, Path).
Path = [win, 1, 4, 6] ;
Path = [lose, 1, 5] ;
Path = [draw, 2, 4, 6] ;
Path = [lose, 2, 5] ;
Path = [draw, 3, 4, 7] ;
Path = [win, 3, 5, 8] ;
false.

The default search does a better job in that it sees moves 1 and 2 are a waste of time since they get to lose states first, which is why I’m surprised the default search figures that out automagically.

I experimented with changing the order of the lose, draw, and win base predicates, and it makes no difference to the results.

The cuts are good because they prune bad moves 1 and 2 early. It just surprises me the default search knows to do that automagically.

Removing the cuts yields:

Path = [1, 5, lose] ;
Path = [1, 4, 6, win] ;
Path = [2, 5, lose] ;
Path = [2, 4, 6, draw] ;
Path = [3, 4, 7, draw] ;
Path = [3, 5, 8, win] ;
false.

Which isn’t the same order as either depth first or breadth first, so I’m guessing there’s some kind of iterative deepening happening under the hood.

BTW, if you want full transitive closure, Richard O’Keefe’s book has an implementation of the Floyd-Warshall algorithm for transitive closure - O(N^3). According to Wikipedia, there are map-reduce implementations, so it might be that SWI-Prolog’s threading can be used to speed things up.
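
This isn’t O’Keefe’s code, but a minimal Warshall-style sketch over the move/3 facts (labels ignored; the names edge/2, warshall/1 and warshall_step/4 are my own):

% edge(X, Y) strips the labels off move/3.
edge(X, Y) :- move(X, _, Y).

% warshall(Closure): Closure is the list of X-Y pairs such that Y is
% reachable from X, built by taking each node in turn as the pivot.
warshall(Closure) :-
  setof(N, M^(edge(N, M) ; edge(M, N)), Nodes),
  findall(X-Y, edge(X, Y), Edges0),
  foldl(warshall_step(Nodes), Nodes, Edges0, Closure).

% For pivot K, add X-Y whenever X-K and K-Y are already in the closure so far.
warshall_step(Nodes, K, Edges0, Edges) :-
  findall(X-Y,
          ( member(X, Nodes), member(Y, Nodes),
            memberchk(X-K, Edges0), memberchk(K-Y, Edges0),
            \+ memberchk(X-Y, Edges0) ),
          New),
  append(New, Edges0, Edges).

With the move/3 facts above, warshall(Closure) includes pairs such as a-k, i.e. k is reachable from a.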


What is the meaning of the numbers on the edges?

What is your definition of a transitive closure? How does it compare to the more traditional definition?

There is definitely no magic iterative deepening.


PS: here is a question. What will happen if you shuffle the order of the move/3 facts?

Another question: why is it that you get 4 solutions, while your graph seems to suggest that there are 6 solutions (two wins, two losses, two draws)? Where is your first expected “win”, Path = [1, 4, 6, win]? Does that have something to do with your cuts?

And just to make this one thing clear: it is a depth-first search of the solution space. Is it possible that you are confusing the search space with the particular directed graph you have there?

To explain the abstract state diagram above better, it represents this tic tac toe game:

Tic Tac Toe

With X to go, the AI player needs to quickly eliminate the left and middle choices (which I’ve labeled 1 and 2).

I’ve made the order of the move/3 facts in the clausal store the same as the order sort/2 produces in the explicit depth and breadth first searches, to look at the difference with the default search.

Threading definitely can speed things up for problems like this, and I’m experimenting with it. What I don’t seem to have made clear is that I’m intrigued by how the default search in SWI Prolog works (I call it transitive closure, which may be incorrect, but I don’t want this to digress into arguments over semantics).

Terminology is only important when you talk to other people who have some pre-conceptions of the meaning of the words you are using. To put it differently, if you throw something like “transitive closure” out there, then someone might think you are talking about a transitive closure.

The “default search” of Prolog is not strictly about how you would traverse a graph represented as a collection of edges.

The default depth-first search of the solution space is about the traversal of the solution space. You define it by ordering your clauses within a predicate; and by ordering the subgoals within a predicate body.


I am sorry, but I do not understand what you mean :frowning: Which “explicit” searches do you mean, and how can something sort the same way for both? I need help understanding.

I’ve put the code for depth first search above. Breadth first is identical except that append(Children, Frontier0, Frontier1) changes to append(Frontier0, Children, Frontier1).
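
Written out, that breadth first version is just the depthfirst_/3 code above with the frontier appended the other way round (a sketch):

breadthfirst(Start, [R|Path]) :-
  breadthfirst_([Start], [], [R, L|ReversedPath]),
  connected(ReversedPath, [L], [], Path).

breadthfirst_([State|_], ReversedPath, [lose, State|ReversedPath]) :-
  lose(State).

breadthfirst_([State|_], ReversedPath, [draw, State|ReversedPath]) :-
  draw(State).

breadthfirst_([State|_], ReversedPath, [win, State|ReversedPath]) :-
  win(State).

breadthfirst_([Parent|Frontier0], ReversedPath, Acc) :-
  getchildren(Parent, Children),
  append(Frontier0, Children, Frontier1),  % children go to the back of the queue
  breadthfirst_(Frontier1, [Parent|ReversedPath], Acc).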

The different results (without cuts) are as follows

?- depthfirst(a, Path).
Path = [win, 1, 4, 6] ;
Path = [lose, 1, 5] ;
Path = [draw, 2, 4, 6] ;
Path = [lose, 2, 5] ;
Path = [draw, 3, 4, 7] ;
Path = [win, 3, 5, 8] ;
false.

?- breadthfirst(a, Path).
Path = [lose, 1, 5] ;
Path = [lose, 2, 5] ;
Path = [win, 1, 4, 6] ;
Path = [draw, 2, 4, 6] ;
Path = [draw, 3, 4, 7] ;
Path = [win, 3, 5, 8] ;
false.

What intrigues me is that the default search doesn’t produce either of these.

Regarding jargon, my favourite example was a YouTube interview I saw with John Backus in which he said he thought he’d used Noam Chomsky’s theories to create BNF notation. Later, when he met Chomsky, the old curmudgeon told him he’d entirely misunderstood everything. (My cynical view is a lot of Chomsky’s stuff is simply meant to be incomprehensible, and whatever meaning you put to it, the creator will dispute.)

I think the term transitive closure was coined by Stephen Kleene, not Chomsky. But either way, I doubt the originator of the term would agree with what programmers think it is.

Out of curiosity, why are you collecting those edge labels instead of the nodes along the path? It is a bit confusing because I still don’t understand the meaning of the numbers, and because they are not unique.

What is “the default search” now?

The edge labels are what automatons are all about. I think the origin of the term “string” (which has grown to be synonymous with “text”) is a list of labels showing the path through an automaton.

Automatons are at the heart of programming language interpreters and many other types of software, so I think it a pity there’s a tradition in Prolog textbooks of focusing on relation(Parent, Child) tuples, which I can’t really see many practical uses for, while not going into automaton triples arc(Parent, Label, Child), for which there is a vast number of applications.
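
As a minimal sketch of that idea (the names string/2, arc/3 and final/1 here are just illustrative, not part of the game code above), collecting the labels along a path through an automaton gives the accepted string:

% string(?State, ?Labels): Labels is the list of edge labels along a path
% from State to a final state, i.e. the string accepted along that path.
string(State, []) :-
  final(State).
string(State, [Label|Labels]) :-
  arc(State, Label, Next),
  string(Next, Labels).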

Regarding the labels not being unique: that’s part of the basics of automatons. In this case, I’ve used 1 as shorthand for X marking the left column, middle row, 2 for X marking the right column middle row…

Each node has to be unique, but labels can appear any number of times in the graph. For Deterministic Finite Automatons (DFAs), there can’t be ambiguity over where a given input symbol leads to, so no duplicate labels from a given node. In natural languages, there tends to be plenty of ambiguity — the word spring for example can be several types of noun as in a season, mechanical device, place underground water comes to the surface… or adjective, verb… — leading to NFAs which need to be converted to DFAs by making the nodes sets of states.

Talking about determinism vs nondeterminism: it’s one of my pet hates in Prolog jargon, where it’s used for “alternative answers” and has no clear relation to its original meaning.

Hmm. So each node in the graph you have is a state of the tic-tac-toe game, and the edges are the move taken by the X player.

Everything else aside, did you understand why you get answers in a particular order? Did you understand the difference between problem space and the particular graph that you are trying to traverse? Do you understand how the order of the clauses within a predicate and the goals within the body of a rule define the problem space?

I think that examining the first few lines of the trace should clear up your doubt about Prolog’s search order.

?- leash(-all),trace.
true.

[trace]  ?- transitive_closure(a, Path).
   Call: (10) transitive_closure(a, _8280)
   Call: (11) transitive_closure_(a, [], _8714)
   Call: (12) move(a, _8770, _8774)
   Exit: (12) move(a, 1, b)
   Call: (12) lose(b)
   Fail: (12) lose(b)
   Redo: (12) move(a, _8770, _8774)
   Exit: (12) move(a, 2, c)
...
   Call: (13) lose(e)
   Fail: (13) lose(e)
   Redo: (13) move(b, _10754, _10758)
   Exit: (13) move(b, 5, f)
   Call: (13) lose(f)
   Exit: (13) lose(f)
   Exit: (12) transitive_closure_(b, [1], [lose, 5, 1])
   Exit: (11) transitive_closure_(a, [], [lose, 5, 1])
   Call: (11) lists:reverse([lose, 5, 1], _8280)
   Exit: (11) lists:reverse([lose, 5, 1], [1, 5, lose])
   Exit: (10) transitive_closure(a, [1, 5, lose])
Path = [1, 5, lose] .
... 

I would rewrite your code like this:

transitive_closure_(S0, Ms, [T|Ms]) :-
  member(T,[lose,draw,win]), call(T,S0), !.
transitive_closure_(S0, Ms, Acc) :-
  move(S0, M, S1),
  transitive_closure_(S1, [M|Ms], Acc).

and now

[trace]  ?- transitive_closure(a, Path).
   Call: (10) transitive_closure(a, _33026)
   Call: (11) transitive_closure_(a, [], _33456)
   Call: (12) lists:member(_33506, [lose, draw, win])
   Exit: (12) lists:member(lose, [lose, draw, win])
...
   Call: (11) lists:reverse([win, 6, 4, 1], _33026)
   Exit: (11) lists:reverse([win, 6, 4, 1], [1, 4, 6, win])
   Exit: (10) transitive_closure(a, [1, 4, 6, win])
Path = [1, 4, 6, win] ;
   Redo: (13) move(b, _34896, _34898)
   Exit: (13) move(b, 5, f)
   Call: (13) transitive_closure_(f, [5, 1], _33456)
   Call: (14) lists:member(_37818, [lose, draw, win])
   Exit: (14) lists:member(lose, [lose, draw, win])
   Call: (14) lose(f)
   Exit: (14) lose(f)
   Exit: (13) transitive_closure_(f, [5, 1], [lose, 5, 1])
   Exit: (12) transitive_closure_(b, [1], [lose, 5, 1])
   Exit: (11) transitive_closure_(a, [], [lose, 5, 1])
   Call: (11) lists:reverse([lose, 5, 1], _33026)
   Exit: (11) lists:reverse([lose, 5, 1], [1, 5, lose])
   Exit: (10) transitive_closure(a, [1, 5, lose])
Path = [1, 5, lose] 
...

HTH

Thanks Carlo

For anyone interested, I’ve put notes on my “using Prolog for game trees” (which is work in progress, so changing a lot) at Game Trees - Frontier Software, mainly to learn the static site generator Hugo.

My current search looks like this:

:- use_module(library(thread)).

getchildren(Parent, Children) :-
  findall(Child, move(Parent, _ , Child), Unsorted),
  sort(Unsorted, Children).

connected([Parent|Parents], [Child|Children], Moves, Acc) :-
  move(Parent, Move, Child), !,
  connected(Parents, [Parent, Child|Children], [Move|Moves], Acc).

connected([_|Parents], Children, Moves, Acc) :-
  connected(Parents, Children, Moves, Acc).

connected([], _, Acc, Acc).

depthlimited(Limit, Start, [R|Path]) :-
  depthlimited_(Limit, [depth(0, Start)], [], [R, L|ReversedPath]),
  connected(ReversedPath, [L], [], Path).

depthlimited_(_, [depth(_, State)|_], ReversedPath, [lose, State|ReversedPath]) :-
  lose(State).

depthlimited_(_, [depth(_, State)|_], ReversedPath, [draw, State|ReversedPath]) :-
  draw(State).

depthlimited_(_, [depth(_, State)|_], ReversedPath, [win, State|ReversedPath]) :-
  win(State).

depthlimited_(Limit, [depth(Limit, _)|Frontier], ReversedPath, Acc) :-
  depthlimited_(Limit, Frontier, ReversedPath, Acc).

depthlimited_(Limit, [depth(N, Parent)|Frontier0], ReversedPath, Acc) :-
  Limit > N,
  succ(N, M),
  getchildren(Parent, Children0),
  findall(depth(M, Child), member(Child, Children0), Children1),
  append(Children1, Frontier0, Frontier1),
  depthlimited_(Limit, Frontier1, [Parent|ReversedPath], Acc).

iterative_deepening(Start, Path) :-
  iterative_deepening_(0, Start, Path).

iterative_deepening_(N, Start, Path) :-
  depthlimited(N, Start, Path).

iterative_deepening_(N, Start, Path) :-
  \+depthlimited(N, Start, Path),
  succ(N, M),
  iterative_deepening_(M, Start, Path).

filtermoves([], [], Acc, Acc).
filtermoves([_|Moves], [[lose|_]|Nexts], Filtered, Acc) :- !,
  filtermoves(Moves, Nexts, Filtered, Acc).
filtermoves([Move|Moves], [_|Nexts], Filtered, Acc) :-
  filtermoves(Moves, Nexts, [Move|Filtered], Acc).

bestmove(State, Move) :-
  findall(Next, move(State, _, Next), Nexts),
  concurrent_maplist(iterative_deepening, Nexts, Paths),
  findall(Opening, move(State, Opening, _), Openings),
  filtermoves(Openings, Paths, [], [Move|_]).

Something that puzzles me is whether this could be done in fewer lines and faster using SWI Prolog’s hardwired searching.

I’ve simplified the abstract graph to leave the “whose turn is it?” complexity for later. What surprises me about the default search (as in not managing lists explicitly, but letting Prolog do it under the hood) is that it finds the higher-up losing nodes first, as a breadth first search would.

This is because of the order of your move/3 facts and the order of the clauses of your transitive_closure_.

The idea is to write a general game player for Stanford University’s http://ggp.stanford.edu/ system.

This is my umpteenth version. I’m keeping notes as I go along at Game Description Language - Frontier Software, and I’ll get to adversarial search soon.

The reason I started with a very simple abstract game tree was to strip the problem down to its very basics, which has led to interesting discoveries about graph traversal that aren’t really relevant to that goal.