SWI-Prolog and Erlang -- tightly integrated?

Hi all,

I have been reading up a bit on Erlang and its approach to writing highly distributed algorithms via lightweight processes and message passing.

I also recall the Prolog in Erlang style discussions a while back and the proposal suggested by @torbjorn.lager.

In practice, could a highly performant system be envisioned where Erlang is used at a low level via an FFI – i.e. tightly integrated – and Prolog is used for higher-level symbolic computation?

I guess technically one would then run two VMs, which seems like a lot of overhead for a combined system.

I am also wondering what overhead Erlang / OTP really carries when compared to Prolog written in an Erlang style.

Any thoughts are much appreciated.

Dan

I doubt there is a sensible answer about the general picture. There probably are answers in concrete scenarios where we know the task, available hardware and requirements for e.g., performance and fault tolerance.

What is “Prolog written in an Erlang style”? What does it look like? Why do you expect overhead?

Hi Boris,

As a brief intro, I looked at some simple code presented by Jesse Anderson [1], and I wonder about the following:

  1. The overhead of message passing between processes – it presumably involves copying messages and some queuing – and what the message-passing transport actually is: IPC? TCP/IP? Shared memory?
  2. Last-call optimization, since the loop is in fact a recursion.
  3. Argument binding of state across the function calls that implement a looping process (a small SWI-Prolog sketch of such a loop follows this list).
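
To make points 2 and 3 concrete on the Prolog side, here is a minimal sketch of such a stateful loop in SWI-Prolog, using a plain thread and its built-in message queue; the predicate names (start_counter/1, counter/1) are made up for illustration:

```prolog
% Minimal sketch: an Erlang-style stateful loop in SWI-Prolog.
% State lives in the argument of a tail-recursive predicate (so
% last-call optimisation applies) and messages arrive via the
% thread's message queue.

start_counter(Tid) :-
    thread_create(counter(0), Tid, []).

counter(Count) :-
    thread_get_message(Msg),
    (   Msg = incr
    ->  Count1 is Count + 1,
        counter(Count1)                      % tail call with new state
    ;   Msg = get(Client)
    ->  thread_send_message(Client, count(Count)),
        counter(Count)
    ;   Msg = stop
    ->  true
    ).
```

A client would then do something like `thread_send_message(Tid, incr)`, or `thread_self(Me), thread_send_message(Tid, get(Me)), thread_get_message(count(C))` to read the state.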

I am wondering how fine-grained processes can be before the overhead of “light” weight processes outweighs (no pun intended) the parallelism provided.

In the talk, Jesse also indicates that in practice one would really use a GenServer, which makes all this more robust – but I am wondering at what processing cost.

Currently, I am implementing an algorithm in Prolog as a recursive call that traverses a graph and computes properties – there is little parallelism implemented, but it seems much more could be done.

I am itching to try doing this in Erlang but wonder how much overhead something like GenServer would have – ultimately, I guess, I may need to benchmark each, which would be quite an effort.

And I wonder how similar parallelism could be implemented in Prolog at the same or lower computational cost.

Dan


[1] The ABCs of OTP - Jesse J. Anderson - YouTube

One of Erlang’s founders, Robert Virding, has written a Prolog that runs on OTP called Erlog. When I tried to use it a couple of years ago, it hadn’t been updated in a while and wouldn’t work on the then-current version of OTP. I see an updated version of Erlog has since been uploaded to GitHub, but I haven’t tried it personally.

The genius of Erlang was to simplify inter-process communication by using the same syntax for parallel computing on a single machine and for distributed nodes across several machines. This hides all the complexity of message queues, sockets etc. from application developers. That in turn makes “scalability” easy, in that the same code can run unmodified on any number of machines, automatically getting faster as the hardware grows.

But a big snag I’ve found with Erlang is that, contrary to Ericsson’s marketing, there is little respect for backward compatibility. Doing common things like web serving, JSON parsing and “conventional” datetime parsing all requires third-party libraries, which all broke in my project when my Linux distribution upgraded OTP from version 22 to 23.

In a nutshell, Erlang is an extremely rich source of great ideas, many of which, like list comprehensions, have been “borrowed” by Python etc. I’d never encountered things like supervision trees before, which have since come to Linux as systemd. But sadly, Erlang’s implementation and documentation leave a lot to be desired.

FYI, if anyone is interested, here’s my (now old) paper about Web Prolog - the Prolog/Erlang combo I envisioned:

https://gup.ub.gu.se/file/207827

I started trying to write a book about Web Prolog, but it’s on hold until I get time to continue working on it. Should anyone express interest in it, I wouldn’t mind sharing the draft manuscript.


Hi Boris,

I now notice that I did not answer your question.

I gather that the Erlang style is to design a computation as being performed by many independent processes that communicate via messaging.

Technically, supporting such a capability requires the kind of lightweight process scheduling (and memory allocation) and process execution infrastructure provided by the Erlang runtime and OTP.

Prolog supports threads and engines – and, I guess, both threads and engines are isolated, but threads don’t have OTP-like schedulers and engines are much more heavyweight.

I guess this means that a Prolog-based algorithm cannot be implemented in SWI-Prolog as many “micro” actors, each performing some inference within a broader inference task while asynchronously communicating with the others.

re: let it fail

I also wonder about the let-it-fail idea – on one hand I still need to get my head around it, but on the other hand it is compelling as well.

I recall a workshop I attended by Jim Coplien of pattern fame, where he explained that in telco systems failure and restart, say in call control, is often better than recovery; it also leads to a fairer resource-allocation approach and simpler code.

Dan

My understanding of Joe Armstrong’s “let it fail” philosophy was that it encourages developers to put more emphasis on logging errors so as to iron out flaws. An advantage of building a system out of independent processes is that part of it can fail (hopefully explaining why in the log) without crashing everything.

I learned Erlang via Functional Programming in Erlang - Online Course and https://www.futurelearn.com/courses/concurrent-programming-erlang

My notes from the above MooCs, which include a simplified version of OTP’s core functions, are at erlang_mooc/doc at master · roblaing/erlang_mooc · GitHub

I find it amusing that Erlang was selected as a replacement for Haskell to teach functional programming, while the man who coined the phrase object-oriented programming, Alan Kay, has said he regards Erlang as the closest thing to what he envisioned OOP to be. It’s hard to understand why Erlang isn’t more popular if it’s considered the best example of both contemporary programming paradigms.


In one talk, the kinds of errors were classified into a matrix of core vs. auxiliary and frequent vs. seldom (my words, I don’t recall the terms used in the talk).

“Let it fail” especially addresses the seldom-occurring bugs in core and auxiliary code – the edge cases not much traveled and not well tested.

The assumption is that such bugs can’t be avoided and the overall system should not be affected by them – in these cases it’s often sensible (in telco, for example) to just let it fail and restart the failed process.

Dan

Thank you for the additional thoughts …

Consider the following: an algorithm that searches for a path to a particular node in a graph [1]:

Each node in the graph has a corresponding (Erlang) process, which knows its own node and the process IDs of its outgoing neighbours.

Every node process sends to its outgoing neighbours its own node, the list of nodes it has received so far, and the target node being sought.

Suppose the graph is in memory, but it could also be distributed across machines.

Could an SWI-Prolog-based solution like this be implemented efficiently – assuming a graph with many millions of nodes?
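
Just to make the shape of the computation concrete, here is a naive sketch using one SWI-Prolog thread per node – clearly one OS thread per node will not scale to millions of nodes, and all predicate names (start_node/2, node_loop/2, search_from/3) are invented:

```prolog
% Naive sketch: one thread per graph node.  Each node thread knows its
% own name and the aliases of its successor nodes, and forwards
% search/3 messages until the target is reached.

start_node(Name, Successors) :-
    thread_create(node_loop(Name, Successors), _, [alias(Name)]).

node_loop(Name, Successors) :-
    thread_get_message(search(Target, Visited, ReplyTo)),
    (   Name == Target
    ->  thread_send_message(ReplyTo, found([Name|Visited]))
    ;   memberchk(Name, Visited)
    ->  true                                  % cycle: drop the message
    ;   forall(member(S, Successors),
               thread_send_message(S, search(Target, [Name|Visited], ReplyTo)))
    ),
    node_loop(Name, Successors).

search_from(Start, Target, Path) :-
    message_queue_create(Reply),
    thread_send_message(Start, search(Target, [], Reply)),
    thread_get_message(Reply, found(RevPath)),
    reverse(RevPath, Path),
    message_queue_destroy(Reply).
```

If the target is unreachable, search_from/3 blocks forever; a real version would need termination detection, and presumably something cheaper than OS threads to get anywhere near millions of nodes.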

[1] Modelling graphs with processes in Erlang - TechRepublic

Right … but this would only simulate a multi-agent system – in Erlang, agents are (by the Erlang spec) completely isolated and communicate via messaging only.

Now I am wondering whether memory is isolated and decentralized as well within the Erlang runtime – or whether storage is a single point of failure for all processes running on the same machine / memory.

I don’t know – having the mailboxes essentially running off the Prolog KB, with all agents accessing the KB concurrently, seems to me like one centralized point of failure.

What if the KB runs out of memory – would all processes halt?

Is this the behavior in Erlang as well?

Back to square one: the closest you can get to many agents, where each agent executes an algorithm in SWI-Prolog, is by using engines. You can schedule these to do their next step of computation when some input for them is ready, and you can do this scheduling using any number of threads. Engines are fairly cheap (a couple of kilobytes plus the stack space they use). The scalability is still somewhat limited by the fact that notably the different global garbage collectors have to visit all engines. If the engines only communicate using messages, it is fairly easy to distribute them over different nodes (Prolog processes on different machines). Otherwise, engines in the same process can share state; engines in different processes require some data sharing.
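
To give a rough feel for how the engines-as-cheap-actors idea looks in code, here is a minimal sketch (predicate names such as actor_loop/1 and handle/4 are invented): the engine’s goal loops, fetching a posted message with engine_fetch/1 and handing back a reply with engine_yield/1, while the caller drives it with engine_post/3.

```prolog
% Minimal sketch of an engine used as a cheap actor.

actor_loop(State) :-
    engine_fetch(Msg),                    % wait for a posted message
    handle(Msg, State, Reply, State1),
    engine_yield(Reply),                  % suspend, handing back Reply
    actor_loop(State1).

handle(incr, N, ok,       N1) :- N1 is N + 1.
handle(get,  N, value(N), N).

demo :-
    engine_create(_, actor_loop(0), E),
    engine_post(E, incr, R1), writeln(R1),   % ok
    engine_post(E, incr, R2), writeln(R2),   % ok
    engine_post(E, get,  R3), writeln(R3),   % value(2)
    engine_destroy(E).
```

A scheduler running on a pool of threads could drive many such engines, stepping each one when a message for it arrives.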

The engines+threads idea was at the basis of the Erlang-in-Prolog work by @torbjorn.lager

I think the key selling point of Erlang is its concurrency programming model, which is built into its VM (BEAM) and supported by language primitives.

This is, in my mind, what Ericsson innovated, based on their concurrency, fault-tolerance and efficiency needs in telco (many concurrent calls) …

This is, in my mind, the key architectural distinction between Erlang and other languages such as Prolog, Java, JavaScript, etc. – other distinctions (e.g. backtracking) notwithstanding.

Would the use of engines offer the same architectural properties of concurrency, fault tolerance, performance and resource efficiency?

Not only. They can also keep state in Prolog variables as well as maintain the current control flow, both forward and wrt. backtracking. They are in some ways related to delimited continuations. Paul Tarau claims you could use them to implement tabling. I think he is right. Might have some nice properties. Surely they are more performant than delimited continuations when it comes to (re)starting and they have no issues with committed choice points. Engines do require more memory than delimited continuations.

They may have something to do with fault tolerance in the sense that we can discard an engine without further consequences to the process. So, if we split the computation into small tasks, we may use engines to do the small tasks and not lose too much should something go wrong.

All these things are just building blocks though. You need something on top of it that provides the properties you want.

All in all, I guess you first need to figure out what task you want to implement, what characteristics the system needs (scaling, fault tolerance, …), what hardware you can throw at it and which (if any) distributed algorithm applies best. Now you can implement it in any Turing complete programming language. In some it is less work than in others :slight_smile:

Distributing work is hard, though. In any case, it involves copying data. Quite often it involves wasted effort, as the result of one task invalidates the result of another. You may also end up with idle compute units because their input data is not ready or there is no work for them. You need to find the right granularity. Then the algorithms are typically a lot harder :frowning: There are great success stories next to many (often untold) failures …

A quick scan of the abstract suggests they are talking about independent communicating (C-)Prolog processes. That is quite different from what (as I recall) was started by Paul Tarau as fluents. I’ve tried to make C-Prolog capable of doing Prolog → C → Prolog → C … call chains. There was so much global state spread all over the code that I eventually gave up (and started SWI-Prolog). It would have been a nasty job to create engines in C-Prolog …

Erlang also has the supervisor hierarchy – I guess this would need to be implemented in some “chained” way: one engine creating another engine while keeping track of the created engine’s ID internally, and handling the created engine’s execution state.

Another matter might be the need to destroy all engines once one engine has solved the problem.
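
Something along these lines might be a starting point – a rough cooperative-scheduling sketch with invented predicate names, under the assumption that every worker engine yields more after a bounded amount of work, or done(Solution) once it has solved the problem (failure and error handling omitted):

```prolog
% Rough sketch: a supervisor that keeps all engine IDs, steps the
% engines round-robin, and destroys the whole group as soon as one of
% them yields done(Solution).

supervise(WorkerGoals, Solution) :-
    maplist([G,E]>>engine_create(_, G, E), WorkerGoals, Engines),
    step(Engines, Engines, Solution).

step([], All, Solution) :-                    % round finished, start over
    step(All, All, Solution).
step([E|Es], All, Solution) :-
    engine_next(E, Msg),
    (   Msg = done(S)
    ->  Solution = S,
        maplist(engine_destroy, All)          % one solved it: kill them all
    ;   step(Es, All, Solution)               % Msg == more: next worker
    ).

% Toy worker: checks one candidate per scheduling slot.
worker(Candidates, Target) :-
    member(C, Candidates),
    (   C == Target
    ->  engine_yield(done(C))
    ;   engine_yield(more),
        fail
    ).
```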

Dan

In this talk [1] Robert Virding gives an overview of the internals of BEAM – how scheduling works, memory allocation, etc.

I would be very interested to see how you would put together an Erlang-style algorithm using engines as the lightweight processes, as well as engines acting as linked supervisors.

Intuitively, I am thinking that there could be two styles in Prolog, along the :- and => (SSU) lines of thinking:

  1. Close to the Erlang functional view, where the problem lends itself to a functional, deterministic decomposition and engines essentially capture deterministic functions.

  2. Parallel Prolog, with a decomposition of the problem into parallel non-deterministic goals, each (manually, by design) allocated to an engine (a toy illustration of the two styles follows this list).
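
Just as a toy illustration of the distinction (not a real design; double/2 is made up):

```prolog
% Style 1: a deterministic "function" written as an SSU rule; an engine
% running such steps behaves like a function server.
double(X, Y) => Y is X * 2.

% Style 2: wrap an ordinary backtracking goal in an engine and pull its
% solutions one at a time with engine_next/2.
?- engine_create(X, member(X, [a, b, c]), E),
   engine_next(E, A1),          % A1 = a
   engine_next(E, A2),          % A2 = b
   engine_destroy(E).
```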

[1] Robert Virding - Hitchhiker's Tour of the BEAM - YouTube

Thank you,

I will need to mull over what you suggest to get my head around it.

I guess the scheduler could use the concurrent predicates to run on multiple threads for load balancing, whilst the schedulers handle / schedule engines to process their respective incoming messages.

Multiple engines would run asynchronously, and an abstraction for their messaging mailbox needs to be implemented – perhaps in the form of asserts, as you indicated earlier.

Each engine would be isolated in the sense that it has its own runtime stacks and even a local KB.

Schedulers could even handle let-it-fail by handing exceptions to supervisor engines and offering them opportunities for cleanup.
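
As a variation on the mailbox idea, one SWI-Prolog message queue per engine could serve as the mailbox instead of asserts – again just a sketch with invented names, assuming the engine’s goal loops on engine_fetch/1 and engine_yield/1 as in the earlier sketch:

```prolog
% Sketch: pair each engine with a message queue as its mailbox; a
% scheduler thread waits on the mailbox and feeds each message to the
% engine via engine_post/3.

spawn_actor(Goal, actor(Engine, Mailbox)) :-
    engine_create(_, Goal, Engine),
    message_queue_create(Mailbox).

send(actor(_, Mailbox), Msg) :-
    thread_send_message(Mailbox, Msg).

% Shown for a single actor; one scheduler thread could serve many
% actors by waiting on their mailboxes in turn.
scheduler(actor(Engine, Mailbox)) :-
    thread_get_message(Mailbox, Msg),
    engine_post(Engine, Msg, _Reply),
    scheduler(actor(Engine, Mailbox)).
```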

Edit:

I am thinking that with WebSockets all this could also be extended to run on multiple machines – although there would then also be some code distribution across machines going on, which, I guess, happens in BEAM / OTP as well.

Dan