Implementing stacks and queues in Prolog - Depth-first and Breadth-first search

That’s true if the list/array implementation stores both the beginning and the end as readily accessible addresses, in which case there’s no difference in the speed of appending at either end.

A problem with Prolog and other languages of its vintage – the various Lisps and SML, which I was introduced to in a MOOC on programming languages that explained the problem nicely – is that getting the address of the end of a list involves starting at the beginning and traversing all the way to the last element every time.

This made append a very dangerous predicate to use blindly if the first list was thousands if not millions of elements long, as is common in breadth-first search. Prolog’s “difference list”, covered by O’Keefe in his chapter on breadth-first search, is a hack whereby the programmer keeps a handle on the end of the list as a free variable, giving a more efficient version of append when the first list is very long.
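As a rough sketch of the idea (my own minimal code, not O’Keefe’s): represent the queue as a pair Front-Back, where Back is the unbound tail of Front, so that enqueuing is a single unification instead of a traversal of the whole front list.

```prolog
% Queue as a difference list: Front-Back, with Back the free "hole"
% at the end of Front.

empty_queue(Q-Q).                        % front and hole are the same variable

enqueue(Item, Front-Back0, Front-Back) :-
    Back0 = [Item|Back].                 % O(1): bind the old hole

dequeue(Item, [Item|Front]-Back, Front-Back).   % O(1): take the head

% ?- empty_queue(Q0), enqueue(a, Q0, Q1), enqueue(b, Q1, Q2), dequeue(X, Q2, _Q3).
% X = a.
```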

My view is that the language implementation should be doing that under the hood; it’s not something the average coder should have to be hassled with. I’m not sure what SWI-Prolog’s status is, but since it supports indexed access to lists (not part of vintage Prolog), I’m guessing the implementation is modern.

Do you mean the dict data structure?

No, I’m referring to nth0/3 etc.

Back in the bad old days, one couldn’t simply grab any element of a list, but had to grapple with the horrors of car and cdr in Lisp, or Prolog’s historical dot functor. I just stumbled on SWI-Prolog -- Lists are special in the documentation, which explains that SWI-Prolog does things the modern (since circa 1980) way.
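For example, with nth0/3 and nth1/3 from library(lists) (autoloaded by SWI-Prolog) you can grab an element by index directly:

```prolog
% 0-based and 1-based indexing into a list.

% ?- nth0(2, [a, b, c, d], X).
% X = c.
%
% ?- nth1(2, [a, b, c, d], X).
% X = b.
```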

Long story short, I think difference lists are a historical relic, pretty academic to modern programming.

I recall putting some notes here, SWISH -- SWI-Prolog for SHaring, as I grappled with learning SWI-Prolog’s lists/arrays.

BTW, today I was trying to go on with chapter 2 when I reached the section where he describes local heuristic ordering and global heuristic ordering… there he introduces a predicate named estimated_distance_to_goal(+Node, -Estimate), which of course he doesn’t take care to explain… so I did some Google searching and saw that what I “did” till now is often labelled “uninformed search”, and now he gets to “informed search”… but to figure out how to implement it one presumably has to take other sources of information into account… he has something of the sadist about him, I have no more doubts about that
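From what I gather it has to be a problem-specific heuristic. For a grid world it could be something as simple as a Manhattan distance – my own guess at a possible definition, not O’Keefe’s code; the X/Y node representation and goal/1 are my assumptions:

```prolog
goal(5/5).   % hypothetical goal position

% estimated_distance_to_goal(+Node, -Estimate)
% Manhattan distance from a grid position X/Y to the goal.
estimated_distance_to_goal(X/Y, Estimate) :-
    goal(GX/GY),
    Estimate is abs(GX - X) + abs(GY - Y).

% ?- estimated_distance_to_goal(2/3, E).
% E = 5.
```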

I found a professor’s (or at least a doctor’s) repositories on GitHub; he has apparently taught some courses on these topics in Prolog at a Californian university… for today I’ll stop here… I didn’t expect this extra hurdle… tomorrow I’ll take a look at his code

Hmm, this is a bit misleading. nth0/3 has to traverse the list. Even if the ergonomics are better and the implementation somewhat optimized, it is still just a linear traversal/search through a singly-linked list.

If you really want an “array” you can use a flat term; accessing an element by index is constant time. You cannot grow or shrink it automagically. For that you can use a dict with (consecutive?) integer keys.
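For instance (just a sketch):

```prolog
% A compound term as a fixed-size "array": arg/3 is O(1), unlike nth0/3.

% ?- T = colours(red, green, blue, yellow), arg(3, T, X).
% X = blue.
%
% functor/3 builds a term with N fresh slots; nb_setarg/3 fills one destructively.
% ?- functor(A, array, 4), nb_setarg(2, A, foo), arg(2, A, X).
% X = foo.
%
% A growable alternative: a dict with integer keys.
% ?- put_dict(1, d{}, foo, D1), put_dict(2, D1, bar, D2), get_dict(2, D2, X).
% X = bar.
```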

There are other data structures that support key-value pairs in SWI-Prolog. But this topic pops up quite often on this list so searching would be good enough.
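For the record, one of them is library(assoc) (AVL trees):

```prolog
:- use_module(library(assoc)).

% ?- list_to_assoc([a-1, b-2, c-3], A0),
%    get_assoc(b, A0, V),
%    put_assoc(d, A0, 4, A1),
%    get_assoc(d, A1, W).
% V = 2, W = 4.
```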

It looks a lot like a snake game :rofl: :rofl:

You ended up deleting that one too… I want to thank you anyway: that website is a perfect complement to O’Keefe :+1:

You can click on the little “pen” to see the edit history; there you can see the original. Not sure if it stays there forever.


I didn’t know the “trick” of the pen… but I won’t miss the link: I already “worked” a little bit with the code this morning and I’ll go on this afternoon

However, if I understood right what O’Keefe writes, since he talks about an estimate of the distance to the goal, in the website’s terminology that would be a case of best-first search rather than Dijkstra’s or A*. Is that correct? At the moment I’m reproducing the code in Python; when I feel I understand the algorithm more clearly, I’ll try to rephrase it in Prolog
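Roughly, the distinction as I understand it, sketched in Prolog (not O’Keefe’s code; children/2, goal/1 and estimated_distance_to_goal/2 are placeholder names for problem-specific predicates): greedy best-first orders the frontier by the heuristic alone, whereas A* orders it by cost-so-far plus heuristic.

```prolog
:- use_module(library(pairs)).

% Greedy best-first: always expand the frontier node with the smallest
% heuristic estimate.  (No visited set here, so it can loop on graphs.)
best_first([Node|_], Node) :-
    goal(Node).
best_first([Node|Rest], Solution) :-
    findall(Child, children(Node, Child), Children),
    append(Rest, Children, Unordered),
    map_list_to_pairs(estimated_distance_to_goal, Unordered, Keyed),
    keysort(Keyed, Sorted),
    pairs_values(Sorted, Frontier),
    best_first(Frontier, Solution).
```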

“what he calls hallucinating…that is removing items that were never there in the first place”

I would say that it is an interesting phenomenon, maybe useful in some cases. Namely: a queue (a difference list) of negative length!
One removes a future queue item (as a variable, say X). When the item is later on inserted into the queue, X is bound to the item.
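A minimal sketch of what that looks like with a Front-Back difference-list queue (my own helper names, not O’Keefe’s code):

```prolog
empty_queue(Q-Q).
enqueue(Item, Front-Back0, Front-Back) :- Back0 = [Item|Back].
dequeue(Item, [Item|Front]-Back, Front-Back).

% Dequeue from an *empty* queue first: X comes back unbound and the queue
% now "owes" one item.  The later enqueue of hello binds X.
% ?- empty_queue(Q0), dequeue(X, Q0, Q1), enqueue(hello, Q1, _Q2).
% X = hello.
```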

I don’t see the usefulness. But maybe some more experienced Prolog people might say it actually is useful. At the moment I’m trying to make some progress with the code at the Red Blob Games website

BTW, I ended up doing exactly the opposite (at least so far) of what I said I would do: I didn’t translate from Python to Prolog but rather tried hard to bring Prolog depth-first search to Python, which led me to discover some unexpected behaviors in the collections.deque class… but at last I found a way to obtain the same sequence of results as the original Prolog in O’Keefe’s book :+1:

PS: I’m not sure that I’m making progress though…actually it looks more like an infinite loop…