The similarity between Haskell and Prolog

I’m learning Haskell from the University of Pennsylvania course (CIS 194) available for free here. However, I’m also going to solve at least some of the problems in Prolog, just to get a sense of what the differences between Prolog and Haskell are. What I’m seeing at the moment is that my code in the two languages ends up very similar.

I feel that – at least my – Prolog solutions lean heavily on findall, include, exclude, maplist, aggregate_all, foldl and member. These all have clear functional-programming parallels. We have currying (or at least partial application), functions as first-class citizens, and pattern matching: all of these are also heavily relied on in Haskell.
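For instance (my own minimal sketch, not from the course), maplist/3 and foldl/4 read almost exactly like Haskell’s map and foldl; double/2 and add/3 are hypothetical helpers for illustration:

```prolog
% double/2 is the analogue of Haskell's (* 2); add/3 is one step of foldl (+).
double(X, Y) :- Y is 2 * X.
add(X, Acc0, Acc) :- Acc is Acc0 + X.

% ?- maplist(double, [1,2,3], Ds).   gives Ds = [2,4,6]
% ?- foldl(add, [1,2,3], 0, Sum).    gives Sum = 6
```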

Can you think of any examples that really draw out the fundamental differences between the two programming experiences? E.g. where the Prolog solution would be markedly – and favourably – different from the Haskell one, based purely on the basic features of the language?

CLP(X) integration seems to be a unique advantage for Prolog. N-queens and Sudoku solvers in Haskell – even those which use solver libraries – are far more verbose.

Querying over a knowledge base and/or expressing rules: expressing facts and then querying them is much simpler in Prolog. The concept of a global “database” against which evaluation is carried out is, I think, unique to Prolog.

I suppose one could derive from the above that thinking about programming as defining a solution space and then progressively constraining it until a solution is reached is natural in Prolog but unnatural in Haskell.


I don’t know Haskell, so I can’t make comparisons, but I agree with what you write here. In my programming experience I’ve never found anything similar to what one book (I don’t remember the title) called “Database Prolog”. I don’t know if SQL has similar features.

PS: DCG rules are also a rarity, I think.

maplist is a higher-order predicate. It requires a closure argument, which is then called via call/N. call/N wasn’t initially in the ISO core standard, but it arrived in Corrigendum 2 nevertheless. You can find a couple of pioneering papers about higher order in Prolog. There are even alternative proposals that differ from what we now have with call/N. For example, there is a Prolog language extension called “hiord”, and another called “hilog”, which you typically don’t use in SWI-Prolog:

HiLog: A foundation for higher-order logic programming
https://www.sciencedirect.com/science/article/pii/074310669390039J

Hiord: A Type-Free Higher-Order Logic Programming Language with Predicate Abstraction
https://link.springer.com/chapter/10.1007/978-3-540-30502-6_7

Yes, I should have mentioned lambdas as another similarity, heavily used with include/exclude/maplist/forall/foreach and so on. I suspect it is this compatibility between logic and functional programming that leads to things like Mercury and Picat.

Yes, it’s a bit of an issue for me when considering Prolog for potentially long-lived projects. It isn’t obvious how to mitigate the risks around individual implementations.

Well, it isn’t just lambdas. It’s the portability of every library and copy/pasted snippet you may have used during the development of a 20k+ LOC project. I personally feel that portability and vendor independence are something Prolog could use a better story for.

(Nothing of importance)

Almost, but not quite.

For example, maplist(lookup(DB),[key1,key2],Values) expands at run time to something equivalent to Values=[V1,V2], lookup(DB,key1,V1), lookup(DB,key2,V2). Here, lookup(DB) looks like a curried function (or, rather: predicate) but it’s not – it’s an ordinary term that is interpreted by call/3 as if these clauses existed:

call(Call, A1, A2) :-
    Call =.. [Call0|Args0], 
    append(Args0, [A1,A2], Args), 
    Call2 =.. [Call0|Args],
    call(Call2).
call(lookup(DB,K,V)) :- lookup(DB, K, V).

One advantage of this is that the term can be manipulated (as in the definition for call/3 above) whereas a Haskell first-class function can’t be manipulated. The “trick” of defining call/1 as if it is a list of all possible goals is quite defensible; something similar could be done in Haskell to allow passing arbitrary objects around and then interpreting them. The definition of call/1 also makes it easy to write a meta-circular interpreter.


If you need something advanced, you are practically bound to a single implementation. Open source implementations at least have the advantage that the “vendor” will not raise the price 🙂 The “vendor” may of course cease to exist. In the open source world this means that key developers leave the project (for whatever reason). They may be replaced. Even if that doesn’t happen, though, the software dies slowly. Basic maintenance will happen because people need it. Only new development will slow down or stop, eventually making the system irrelevant. Other implementations are likely willing to fill some of the gap to get new users on board more easily.

And, finally, I’ve been involved in porting several huge applications. That is not easy, but the required resources are typically only a tiny fraction of the original development time. If the source system is open source you can port its libraries. Otherwise you need to reimplement the libraries used, but typically only the parts you actually need.

All this should be pretty manageable.


Some Haskell advocacy is a bit curious to me. In homework 4, exercise 1, the author requires us to make functions more idiomatic. Here is an example:

fun2 :: Integer -> Integer
fun2 1 = 0
fun2 n | even n = n + fun2 (div n 2)
       | otherwise = fun2 (3 * n + 1)

Meanwhile, the idiomatic version I think is supposed to be:

fun2Idiomatic :: Integer -> Integer
fun2Idiomatic =
  sum . filter even . takeWhile (/=1) . iterate choose
  where choose x | even x = div x 2
                 | otherwise = 3*x + 1

It is idiomatic because it uses canned recursion functions and is more explicit, yet:

  1. It is slower and requires significantly more memory to execute.
  2. It is longer.
  3. Is it clearer?

Meanwhile, I’d write fun2 exactly as the original in Prolog. It strikes me as more logical.
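Something like this direct transliteration (my own sketch of what “exactly as the original” could look like):

```prolog
% fun2/2: a direct Prolog transliteration of the Haskell fun2 above.
fun2(1, 0) :- !.
fun2(N, S) :-
    N mod 2 =:= 0, !,      % even case: add N and recurse on N/2
    N2 is N // 2,
    fun2(N2, S0),
    S is N + S0.
fun2(N, S) :-              % odd case: recurse on 3N+1
    N2 is 3 * N + 1,
    fun2(N2, S).
```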

Manageable, but is it sellable to management? 🙂 I certainly take your point, and I can see how an application may benefit so overwhelmingly from Prolog that all other concerns are a rounding error. For example, something like a stock exchange, which under the hood might just be a massive rules engine, or an airline booking system for similar reasons.

Less obvious is how to sell it where its use is an aesthetic preference or makes a less monumental difference. The obvious routes are JVM implementations, or language-specific embeddable libraries, since they do not require changing ecosystems. Yet, as far as I can see, all implementations with this focus are dead.

I’ve worked in and tried many languages – Prolog is my favourite to date. If I were a Java dev and could choose to write JARs in JProlog, I would; yet clearly others don’t. Think of orchestrating DAG pipelines for data processing: it’s such an obvious fit for Prolog, and horrendous in Java. Think of Apache Spark scripts for big-data ETL and how much sense specifying them in Prolog would make versus the PySpark or Spark SQL equivalents. Yet here we are. For whatever reason, the world is just not ready.

EDIT: Another – for me – clear use case for a subset of Prolog is as a replacement for SQL. I work as a data scientist and we often write huge amounts of SQL. Since SQL is not composable, the best you can do is make views for some common things, but you can’t recycle it as if it were code. You can’t make libraries, compose parts, and so on. Prolog here – at least as a translator to SQL – makes an order of magnitude more organisation of query code possible. I’ve tried to use macro preprocessors for this, which is similar to what commercial systems like DBT do, but it barely scratches the surface. Think libraries and DSLs for your queries.
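A toy sketch of what I mean (all names here are hypothetical): WHERE fragments as reusable Prolog terms, composed into one statement and rendered to SQL text only at the end:

```prolog
% Reusable condition fragments as plain Prolog terms.
where_active(cond('status = \'active\'')).
where_min_age(Min, cond(C)) :-
    format(atom(C), 'age >= ~w', [Min]).

cond_text(cond(C), C).

% Compose any list of fragments into a single SELECT statement.
select_sql(Table, Conds, SQL) :-
    maplist(cond_text, Conds, Cs),
    atomic_list_concat(Cs, ' AND ', Where),
    format(atom(SQL), 'SELECT * FROM ~w WHERE ~w', [Table, Where]).
```

Because the fragments are just terms, they can live in libraries and be recombined freely – something views alone don’t give you.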


Good points

The JVM is of course nice in a Java-based ecosystem, but otherwise mostly makes the system slower. The new route for this seems to be WASM (WebAssembly). It is not really there yet: 64-bit support is not yet usable and multi-threading is poor. For SWI-Prolog it is also a lot slower (recent tests show about 5 times). For Ciao Prolog the difference seems much smaller. I have no clue why, nor whether that will improve over time or can be fixed by minor changes to the implementation.

I have been involved in several projects using Prolog this way, either using relational data or linked data (RDF). Commercial users also use this route, some compiling to SQL.

P.s. There is ongoing work introducing PIPs: Prolog Improvement Proposals. We are still shaping the process and the definition of a PIP. The first one is approaching completion: the Janus interface to Python. This will result in a public description of the interface and open source implementations for SWI-Prolog and XSB, with the Ciao team also following the process. The hope is that other Prolog systems that wish to add a Python interface will seriously consider this instead of inventing something from scratch.


From the SSU page of the SWI-Prolog manual:

In practice though, Prolog is both a logic programming language and a language for expressing computations in a near procedural style. The first is used to solve (notably) combinatorial problems while the latter is used for I/O, data transformation and the many non-logical operations that are involved in many applications.

As we have seen from these examples, writing procedural code in Prolog requires us to follow the two basic principles below. Both principles have been properly described in The Craft of Prolog (O’Keefe, 1990).

  • Structure every clause as Head :- Guard, !, Body. Every clause has the cut as early as possible. Guard can be empty. The last clause often does not need a cut.
  • Avoid that the head unification binds values in the goal term. We see this may lead to undesirable results such as sum_list(L,S) binding L to [] and S to 0, as well as loss of steadfastness, causing max(5,2,2) to succeed. The first requires additional var/1 or nonvar/1 tests. The second requires delaying unification until after the cut.

Picat provides the =>/2 alternative for the Prolog neck (:-/2) to force the above practices.
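The steadfastness point about max(5,2,2) can be illustrated like this (a sketch of my own, not from the manual):

```prolog
% Non-steadfast: max_bad(5,2,2) wrongly succeeds, because the first
% clause fails already at head unification (2 does not unify with 5)
% and the second clause then simply returns the second argument.
max_bad(X, Y, X) :- X >= Y, !.
max_bad(_, Y, Y).

% Steadfast: delay the output unification until after the cut.
max_good(X, Y, Max) :- X >= Y, !, Max = X.
max_good(_, Y, Y).
```

With max_good, the call max_good(5,2,2) fails as it should, while max_good(5,2,M) still binds M to 5.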

Since you’re going in a slightly more philosophical direction this time, I wouldn’t forget Gottlob Frege. Relation, function, functional application: he defined some of the core concepts. Many of the successive developments in formal semantics were simply technical refinements of his basic conceptions (whether these can be held to be satisfactory in linguistics, for example, is an open issue IMO).

I mean, they explain certain aspects of language and of meaning very well, of course, but they have problems with other aspects. Maybe like programming languages: one is good for a certain task, but maybe not for another.

Is F a special symbol representing the function result? It looks noisy to me, e.g. fun2(1) = F => F = 0. I think that fun2(1) = 0 would have been nicer.

Are you referring to propositional logic? Aristotle is also credited with having invented taxonomy, even as practiced today, so I guess that should count as “relational”.

Aristotle’s categories (e.g. taxonomy) have “arguments”, i.e. instances of a species belong to said species, which may in turn be a sub-species, and so on. Further, I don’t think syllogistic (Aristotelian) logic is typically classified as zero-order, since it does distinguish between propositions and their structure.

In Haskell, unevaluated expressions are called “thunks”, and everything starts out that way. Thunks are evaluated only as much as necessary for pattern matching: pattern matching drives Haskell evaluation. This “lazy”, pattern-matching-driven evaluation is rather a lot like unification. What’s missing to complete the analogy is backtracking.

A further curiosity is that when Haskell evaluates a program it performs a “graph reduction” in an attempt to evaluate sub-expressions only once – kind of like tabling by default.
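In SWI-Prolog one can get a taste of the thunk idea with freeze/2 (my own sketch, not from the post above): the tail of a stream is a delayed goal that only runs when someone matches on it – a thunk, more or less:

```prolog
% nats/2: an "infinite" list of naturals whose tail is a frozen goal.
% The recursive call only runs once T is unified with something.
nats(N, [N|T]) :-
    M is N + 1,
    freeze(T, nats(M, T)).
```

Unifying the result with a partial list pattern, e.g. nats(0, L), L = [A,B,C|_], forces exactly three elements (0, 1, 2) and leaves the rest delayed.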


But Haskell is notoriously slow – not only the compiler but also the runtime. Take this example:

/* David Turner's (SASL Language Manual, 1983) */
% modfilt(+Closure, +Integer, -Integer, -Closure)
modfilt(C, M, X, E) :- call(C, Y, D), modfilt2(D, M, Y, X, E).

modfilt2(D, M, Y, X, E) :- Y mod M =:= 0, !, modfilt(D, M, X, E).
modfilt2(D, M, X, X, modfilt(D,M)).

% primes(+Closure, -Integer, -Closure) 
primes(C, X, primes(modfilt(D,X))) :- call(C, X, D).
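The queries below also use iter and take, which the snippet doesn’t define; definitions along these lines (my assumption, following the same closure convention of call(C, Element, NextClosure)) make it run:

```prolog
% iter(+Fun, +Seed, -Value, -Closure): the stream Seed, Fun(Seed), ...
iter(F, X, X, iter(F, Y)) :- call(F, X, Y).

% take(+N, +Closure, -List): force the first N elements of a stream.
take(0, _, []) :- !.
take(N, C, [X|Xs]) :-
    N > 0,
    call(C, X, D),
    M is N - 1,
    take(M, D, Xs).
```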

How fast does Haskell perform the exact same lazy algorithm?

/* SWI-Prolog 9.1.21 */
?- time(take(1000,primes(iter(succ,2)),_)).
% 1,566,984 inferences, 0.125 CPU in 0.118 seconds (106% CPU, 12535872 Lips)
true.

?- take(1000,primes(iter(succ,2)),L), write(L), nl.
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,
101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,
Etc..

Edit 15.01.2023
It might still be the case that Haskell is fast for the above example. But you can
find discussions in this forum where we found a few examples in which
Prolog was faster. Sometimes it has to do with Prolog being optimized for backtracking.

If I have time I will do the benchmark myself, since the Haskell code is already here:

primesT = sieve [2..]
          where
          sieve (p:xs) = p : sieve [x | x <- xs, rem x p > 0]

https://wiki.haskell.org/Prime_numbers#Turner.27s_sieve_-_Trial_division

But I first need to find a machine that has both Haskell and Prolog installed.