I agree.
I did not list specific examples because I noted some of them over two years ago in my series of posts about using ChatGPT with Prolog. Many of those same problems persist today. The only noticeable improvement on some of them came with the OpenAI o1 model, which I currently don’t have a subscription for.
Rather than starting with a specific technology, the focus should be on solving real-world problems. This requires careful consideration of requirements such as:
- Need for web interfaces
- Non-deterministic processing capabilities
- Computational intensity
- Real-time processing requirements (e.g., aviation software)
After that, search for established technologies that consistently prove valuable, such as:
- Python: Offers extensive production-ready libraries with domain expert input
- Prolog: Excels in backtracking, parsing, constraints, and closed-world solutions
- HTML/JavaScript/CSS: Enables versatile interface creation
- Cloud services: Provide reliability and adaptability
- Content Delivery Networks: Support diverse technological needs
Established technologies are usually considered before searching for alternatives. For example, a few days ago I started working with MiniZinc because I needed to do constraint solving in a standalone single web page, where the page is not served by a web server.
In other words, starting from real-world problems generates more pertinent questions, and at times the combination of Prolog and LLMs may come up, making the noted questions relevant. Many more such questions about possible synergies between Prolog and LLMs exist.
Keeping tabs on research papers on arXiv turns up work related to this. While I check several different sources weekly, one of the more notable ones for this question is
https://arxiv.org/list/cs.SC/recent
Recent developments in LLM capabilities, particularly in reasoning, are noteworthy. The emergence of concepts like Chain of Continuous Thought (Coconut) shows promise. However, the choice between LLMs and Prolog often hinges on whether hallucinations are acceptable. When factual accuracy is crucial, direct LLM usage may be unsuitable, though LLMs can still be valuable in:
- Augmenting results with Prolog-based fact-checking
- Generating Prolog code (with mandatory verification)
- Contributing to design-phase ideation
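To illustrate the "mandatory verification" point above, here is a minimal Python sketch of one way to gate LLM-generated Prolog before using it. This is an assumption-laden illustration, not a recommended implementation: it assumes SWI-Prolog (`swipl`) is on the PATH, and the "generated" code is a hand-written stand-in for actual LLM output.

```python
# Sketch: reject LLM-generated Prolog that SWI-Prolog cannot load cleanly.
# Assumes swipl is installed; the `generated` string stands in for LLM output.
import os
import shutil
import subprocess
import tempfile

def verify_prolog(code: str) -> bool:
    """Return True if swipl consults the code without reporting errors."""
    with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # Load the file quietly, then halt; treat any stderr output
        # (syntax errors, warnings) as a verification failure.
        result = subprocess.run(
            ["swipl", "-q", "-g", "halt", path],
            capture_output=True, text=True, timeout=30)
        return result.returncode == 0 and not result.stderr
    finally:
        os.remove(path)

# Stand-in for code an LLM might produce; a real pipeline would also
# run semantic checks (test queries), not just a clean load.
generated = (
    "parent(tom, bob).\n"
    "parent(bob, ann).\n"
    "ancestor(X, Y) :- parent(X, Y).\n"
    "ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).\n"
)

if shutil.which("swipl"):
    print("verified:", verify_prolog(generated))
else:
    print("swipl not found; skipping verification")
```

A clean load is only a first gate; verifying that the generated predicates answer known test queries correctly would be the natural next step.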
The trend for model makers is to improve the models by giving them reasoning abilities, and they are making progress in this area:
- OpenAI o1 model
- OpenAI o3 model
What I don’t see people doing is using LLMs during the design phase to ask for ideas. Sometimes an LLM will give a novel idea that is worth investigating. This includes asking how to combine technologies, Prolog among them. Often, an LLM will not mention Prolog in its answers unless Prolog is suggested explicitly or the prompt touches on logic programming or non-determinism.
While there’s apparent demand for ready-to-use interfaces between Prolog and LLMs, several considerations warrant attention:
- The rapidly evolving landscape of coding assistants
- The cost-benefit ratio of implementation
- Potential educational applications, though LLM hallucinations remain a concern
- The need for expertise in prompt engineering
The question I would ask is, “Is the reward worth the effort at this time?”
Important Considerations:
- LLM performance can fluctuate over time, unlike traditional software
- API models, while more stable, typically have limited lifespans
- Implementation often requires user accounts or API keys, raising questions about cost and accessibility