Specification and Guide of SWI-Prolog's Virtual Machine Instructions (VMIs)

Hi all,

To become an advanced SWI-Prolog programmer it's crucial to understand the SWI-Prolog virtual instruction set.

I think it's time that this is documented systematically from the minds of the few who have that knowledge.

Can we use this thread – or create an entry somewhere (wiki?) to list and describe:

  • Each Virtual Machine Instruction, and groups of instructions if they belong together.
  • What each instruction causes to happen (its semantics).
  • What runtime structures it relies on – the Virtual Machine's runtime “data” architecture.

I envision this as a guide as well, so it's important to present this with many examples – Prolog code/clauses and the generated VMIs – from the simple to the complex.

thank you,

Dan


Wiki: SWI-Prolog Virtual Machine Instructions

Remember that the way wiki topics are set up is that the Wiki topic will be restricted to just the information and all discussions and such are to take place in Wiki Discussion: SWI-Prolog Virtual Machine Instructions.

Discussions and other off topic information in the Wiki topic will be moved.

@jan how helpful do you think reviewing http://wambook.sourceforge.net/wambook.pdf would be for getting started in exploring what VMIs in SWI-Prolog are and how they work?

I started reading it again – and this time, things seem to start to stick – such as the idea of flattening a term into register-based, compiled instructions that build the flattened structure on the heap (that’s how far I got for now, p. 13).

I disagree. I don’t need to understand the x86 instruction set to write good C++ code. (And I don’t know the x86 instruction set – knowing as little as possible about the x86 has been one of my life goals and so far I’ve succeeded; I don’t mind having learned and used more reasonable ISAs such as PDP-11 and PowerPC.)

The first implementations of Prolog didn’t use the WAM and people were able to write code just fine. Also, there were implementations that used structure sharing (most modern implementations use structure copying); again, the two styles didn’t create an issue for writing code.


I would venture to say that it very much depends on the problem you are trying to solve.

I think the reason why languages such as C++ (and now Rust) exist is because for their problem domain you want to program a machine.

Prolog is certainly more high-level, but if you end up wanting to program the machine – mostly when you want to optimize performance or memory – then the “machine” consists of VMI optimizations (e.g. LCO), indexing, tabling, parallel processes and other such features.
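As a concrete illustration of one such VMI-level optimization, here is a minimal sketch of last-call optimization (LCO): when the recursive call is the last goal in a clause body, the VM can reuse the current environment frame instead of growing the local stack. The predicate name len/3 is my own, chosen for illustration.

```prolog
% Tail-recursive list length with an accumulator.  Because the
% recursive call is the last goal in the body, LCO lets the VM
% reuse the current frame, so the local stack stays constant
% even for very long lists.
len([], N, N).
len([_|T], N0, N) :-
    N1 is N0 + 1,
    len(T, N1, N).
```

For example, `?- len([a,b,c], 0, N).` succeeds with `N = 3`. A version that did the arithmetic after the recursive call would not get this benefit, since the recursion would no longer be the last goal.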

Edit:

Perhaps it’s a case in point that I started to look at Prolog idioms through the lens of the VMI, to understand where the Prolog implementation may be inefficient and could be improved.

I.e. my programming needs led me to be unsatisfied with what currently exists, and to seek ways to improve it – or at least suggest improvements.

Agreed. And BinProlog is implemented using continuation passing – again, no need to understand the implementation in order to write good code.

(Protip: if you’re ever in a Prolog programming contest, try to avoid competing with Paul Tarau.)

Thank you.

Interestingly, I just read a short paper by Paul Tarau, where he derives a new Prolog interpreter from a simple unfold description of terms. Thank you for the additional reference.

I think this is key – the distinction between “what” and “how” is an idealization – on a machine with unlimited resources and speed you can focus on the what only.

However, once you introduce memory constraints, performance and other such non-functional requirements (success criteria) you are bound to look into the how.

Eventually, you arrive at the VMIs and look even below – although, practically speaking you go the external predicate route (or change language).

Dan

In my experience it is mostly trying to avoid competing with Peter Stuckey :slight_smile:


Mildly. I’d start reading https://www.swi-prolog.org/download/publications/A_Portable_Prolog_Compiler.pdf. After all, that is what started SWI-Prolog. Next is listing virtual machine code, seeing how it is implemented in pl-vmi.c and how the clause indexing works from SWI-Prolog -- Just-in-time clause indexing and, if you’re brave, from pl-index.c. The backtracking machinery is in PL_next_solution() of pl-wam.c, and with that you’ve covered most of the VM.
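For the “listing virtual machine code” step, SWI-Prolog’s built-in vm_list/1 does the job. A minimal sketch – the tiny predicate p/1 is my own example, and the exact instruction listing varies between versions, so I won’t reproduce it here:

```prolog
% A small clause involving a compound term, so the listing shows
% how f(X) is flattened into VM instructions.
p(X) :- q(f(X)).

% At the toplevel, list the compiled VM instructions for p/1:
% ?- vm_list(p/1).
```

Comparing the listings of a few variations (constant vs. variable arguments, nested terms, conjunctions) is a quick way to build intuition before diving into pl-vmi.c.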

That should take most seasoned C programmers a couple of weeks at the max. As several people have argued here, that is probably not worth the trouble if you want to use Prolog. Yes, one has to understand roughly the clause indexing of the implementation you use (the above docs should suffice). Structure sharing implementations are out of fashion these days. They do have impact though. For example, where

 member(X, [a,b,c,d])

may not be too bad in a structure sharing system, in a copying implementation you are much better off writing the code as below, since the member/2 solution first pushes [a,b,c,d] onto the stack and then calls the non-indexed member/2 (also for structure sharing). If the member/2 succeeds you most likely leave a choicepoint (also for structure sharing) and eventually GC has to discard the list (not for structure sharing).

valid(a).
valid(b).
valid(c).
valid(d).

and use

 valid(X).

Of course, it isn’t really hard to write a macro expansion to do this for you :slight_smile: In most cases I prefer to write the code by hand, as it also allows me to invent a good name that makes clear what I want to state about X.
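For what it’s worth, here is a sketch of such an expansion using goal_expansion/2 – the predicate name valid_member/1 and the exact strategy are my own invention, not necessarily how Jan would write it:

```prolog
:- dynamic valid_member/1.
:- multifile user:goal_expansion/2.

% Rewrite member(X, List) over a ground list of atoms into a call
% to a generated fact table, so clause indexing applies and no
% list needs to be pushed onto the stack at run time.
user:goal_expansion(member(X, List), valid_member(X)) :-
    is_list(List),
    maplist(atom, List),
    forall(member(E, List),
           (   clause(valid_member(E), true)
           ->  true
           ;   assertz(valid_member(E))
           )).
```

After loading a file containing `p(X) :- member(X, [a,b,c,d])`, the call becomes `valid_member(X)` against first-argument-indexed facts. As Jan says, hand-written facts with a meaningful name are usually clearer.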


I secretly (i.e. hidden from myself) harbor the thought of getting into the Prolog implementation code – although, right now I should really spend the time on advancing the system I am working on – so, it’s more for during my “down time”.

I do, however, see long-term value in becoming comfortable with the Prolog implementation and, possibly, also contributing.

Dan

Interesting,

" Database references from one clause to another are permitted." (p3)

What do they mean by that … are these pointers, for faster lookup of linked data structures?

Dan

Right …

So, would a clause lookup with a reference be faster than a clause lookup with an indexed functor and arg, say, when there is a large number of them?
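In SWI-Prolog you can experiment with this directly: assertz/2 returns a clause reference, and clause/3 retrieves the clause through that reference without any indexed search. A small sketch (fact/2 is an invented example predicate):

```prolog
:- dynamic fact/2.

% Hold a direct reference to a clause and fetch it back later.
% The reference pinpoints one specific clause, so retrieval does
% not go through functor/argument indexing or a hash lookup.
demo(K-V) :-
    assertz(fact(a, 1), Ref),
    assertz(fact(b, 2), _),
    clause(fact(K, V), true, Ref).
```

Calling `?- demo(P).` unifies `P` with `a-1`, having gone straight to the first clause via `Ref`. Intuitively a direct reference should beat an indexed lookup when the clause count is large, since it skips the hash step entirely, though I have not benchmarked this.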

Dan