LLM and Prolog, a Marriage in Heaven?

There are more and more papers of this sort:

Reliable Reasoning Beyond Natural Language
To address this, we propose a neurosymbolic approach that prompts LLMs to extract and encode all relevant information from a problem statement as logical code statements, and then use a logic programming language (Prolog) to conduct the iterative computations of explicit deductive reasoning.
[2407.11373] Reliable Reasoning Beyond Natural Language
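
To make the idea concrete, here is roughly the kind of program such a pipeline might emit for a toy word problem. This is my own illustration, not an example taken from the paper: the LLM is only asked to encode the statement, and the Prolog engine carries out the deduction.

```prolog
% Hypothetical output for the word problem:
% "Alice has 3 apples. Bob has twice as many apples as Alice.
%  How many apples do they have together?"
apples(alice, 3).
apples(bob, B) :-
    apples(alice, A),
    B is 2 * A.

total_apples(T) :-
    apples(alice, A),
    apples(bob, B),
    T is A + B.

% ?- total_apples(T).
% T = 9.
```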

The future of Prolog is bright?

P.S.: This is also fun:


https://twitter.com/billyuchenlin/status/1814254565128335705

1 Like

A couple of years ago there were a number of papers that used the same approach for planning: they basically translated some natural language instructions to PDDL (the Planning Domain Definition Language, used by all mainstream planners) or, alternatively, Python, then they passed the result to a planner or to a robot’s API.

For example, see the following preprint:

Despite the claims in that paper, subsequent work showed severe problems with the approach. See the following for a review of planning using LLMs:

Since the paper you cite follows the same approach, except that it translates reasoning problems to definite programs rather than PDDL and hands them off to a Prolog interpreter rather than a planner, I anticipate the same failure modes as with the earlier work, which btw looked like it worked until some experts on planning had a look and pointed out the pressure points that cause the whole effort to collapse.

The problem in general is that LLMs cannot be relied upon to do the translation to a formal language accurately, unless they’ve already seen an accurate translation of what they are called to translate. Translating natural language to a formal language itself requires decision-making that implies understanding of both languages and the domain of discourse, and such understanding is absent from LLMs in novel domains. In other words, the proposed approach might yield a decent Prolog boilerplate generator, but it will be brittle and easy to break with simple techniques (like changing the names of symbols, as in the obfuscated Blocksworld domain used to demonstrate the brittleness of LLMs-as-planners).
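
To see why renaming alone is enough to break this, here is the obfuscation trick in miniature (an illustrative sketch of mine; the actual Mystery Blocksworld experiments are phrased in PDDL rather than Prolog). A Prolog engine treats the two programs below identically, but a model that relies on having seen predicates like on/2 and above/2 in its training data gets no lexical cues from the second one:

```prolog
% Transparent version: meaningful symbol names.
above(X, Y) :- on(X, Y).
above(X, Y) :- on(X, Z), above(Z, Y).

% Obfuscated version: identical logical structure, meaningless names.
brellic(X, Y) :- snorp(X, Y).
brellic(X, Y) :- snorp(X, Z), brellic(Z, Y).
```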

3 Likes

Another paper that mentions LLMs and Prolog: Virtual Machinations: Using Large Language Models as Neural Computers - ACM Queue
which I found from here: Satnam Singh on LinkedIn: Virtual Machinations: Using Large Language Models as Neural Computers
(also: x.com)
“Groq” is a company that’s developing chips that compete with Nvidia (the person who mentioned it has worked at both Google and Meta, so we can only speculate on where this is going …)
There’s also a Datalog derivative developed at Google: GitHub - EvgSkv/logica: Logica is a logic programming language that compiles to SQL. It runs on Google BigQuery, PostgreSQL and SQLite.

1 Like

Yes, I’ve been experimenting with this approach (gpt-4 and sonnet 3.5 tested): use the LLM to map English to Prolog, run the query, then map back from Prolog to English. It takes a bit of badgering to get the LLM to generate complete, runnable code rather than merely sketch a possible Prolog representation, but the method does work.
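
For anyone who wants to try the same loop, here is a minimal sketch of the round trip in SWI-Prolog. The llm_complete/2 predicate is a hypothetical wrapper around whatever HTTP client you use to call the model (it is not a real library predicate) and is assumed to return the model’s reply as a string.

```prolog
% Sketch only: llm_complete(+Prompt, -Reply) is assumed to call the LLM
% and return its reply as a string.
english_to_answer(Question, EnglishAnswer) :-
    format(atom(ToProlog),
           "Translate this question into a complete, runnable Prolog program that defines answer/1: ~w",
           [Question]),
    llm_complete(ToProlog, ProgramText),
    % Load the generated clauses into a scratch module and run the goal.
    setup_call_cleanup(
        open_string(ProgramText, In),
        load_files(generated, [stream(In), module(generated)]),
        close(In)),
    generated:answer(Answer),
    format(atom(ToEnglish),
           "State this Prolog answer in plain English: ~w",
           [Answer]),
    llm_complete(ToEnglish, EnglishAnswer).
```

In practice you would also want to sandbox or whitelist the generated clauses before calling them, since the model is effectively writing code that your system then executes.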

Tangentially, we’re finding that the LLMs are quite good at transpiling, even across programming language paradigms. So one can prototype in Prolog, then – if engineering objects to running Prolog in production – transpile to some other programming language, e.g. TypeScript.

1 Like

Now I wonder whether LLMs should be a bit more informed by results from neuroendocrinology research. I remember Marvin Minsky publishing his ‘The Society of Mind’:

Introduction to ‘The Society of Mind’
https://www.youtube.com/watch?v=-pb3z2w9gDg

But this made me think about multi-agent systems. Now, with LLMs, what about a new connectionist and deep-learning approach, plus Prolog for the prefrontal cortex (PFC)?

But who can write a blueprint? There is this amazing guy called Robert M. Sapolsky, who recently published Determined: A Science of Life Without Free Will and who calls consciousness just a hiccup. His turtles-all-the-way-down model is a tour de force leading to an unsettling conclusion: we may not grasp the precise marriage of nature and nurture that creates the physics and chemistry at the base of human behavior, but that doesn’t mean it doesn’t exist.

Yet the prefrontal cortex (PFC) still seems quite brittle, not extremely performant, and quite energy hungry. So Prolog might excel?

Determined: A Science of Life Without Free Will
https://www.amazon.de/dp/0525560998

It’s amazing how we are in the midst of new buzzwords such as superintelligence, superhuman, etc. I used the term “long inferencing” in one post somewhere for a combination of an LLM with more capable inferencing, compared to current LLMs, which rather show “short inferencing”. Then just yesterday it was Strawberry and Orion, as the next leap by OpenAI. Is the leap getting out of control?

OpenAI wanted to do “Superalignment” but lost a figurehead. Now there is a new company which wants to do safety-focused, non-narrow AI, but they chose another name. If I translate superhuman to German, I might end up with “Übermensch”, first used by Nietzsche and later by Hitler and the Nazi regime. How ironic!