Python Prolog MCP server, using SWI Prolog

Hi,

I put together a Python MCP server that connects to a local SWI-Prolog instance, enabling standardized tool integration for agentic workflows.

This allows MCP clients (e.g., LLMs) to:

  • Autodiscover the available Prolog tools (start server, add clauses, run queries)
  • Communicate bidirectionally with a running Prolog instance

Repo: github.com/wendelinism/prolog-mcp-server

Features

  • MCP-compliant: Tools are auto-discoverable by MCP clients
  • Flexible I/O: Runs as an HTTP server or via stdio
  • Dockerized Prolog: Uses swi-prolog in Docker for easy setup
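To illustrate the "auto-discoverable" part: an MCP client can ask the server for its tool catalogue before calling anything. Below is a plain-Python sketch (not the actual server code) of the kind of metadata a tools/list response carries; the tool names and schemas here are my guesses at what the repo exposes, so check the README for the real ones.

```python
import json

# Hypothetical tool catalogue; names and schemas are illustrative only.
TOOLS = [
    {
        "name": "start_prolog",
        "description": "Start the SWI-Prolog instance.",
        "inputSchema": {"type": "object", "properties": {}},
    },
    {
        "name": "add_clauses",
        "description": "Assert Prolog clauses into the knowledge base.",
        "inputSchema": {
            "type": "object",
            "properties": {"clauses": {"type": "string"}},
            "required": ["clauses"],
        },
    },
    {
        "name": "run_query",
        "description": "Run a Prolog query and return the bindings.",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
]

def list_tools():
    """Return a tools/list-style result as JSON, as an MCP client would see it."""
    return json.dumps({"tools": TOOLS}, indent=2)

if __name__ == "__main__":
    print(list_tools())
```

Because the schema travels with the tool, a client (or LLM) can discover and call these tools without any hand-written glue code.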

Getting Started

  1. Clone the repo and build locally (see README)
  2. Install SWI-Prolog via Docker
  3. Run either in HTTP mode or stdio for direct piping
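For the stdio mode, MCP frames requests as JSON-RPC 2.0 messages on standard input/output. A minimal sketch of what a client pipes to the server; the run_query tool name and its arguments are illustrative, not taken from the repo:

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build one newline-delimited JSON-RPC 2.0 message, as MCP uses over stdio."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Discover the available tools, then call one of them:
print(jsonrpc_request("tools/list"), end="")
print(jsonrpc_request(
    "tools/call",
    {"name": "run_query", "arguments": {"query": "member(X, [1,2,3])."}},
    req_id=2,
), end="")
```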

Note: Currently tested on Linux only. No PyPI package (yet).

I hope this is useful for other people as well who are experimenting with the (agentic) integration of LLMs and Prolog.
Any feedback welcome.

Interesting. I have very little knowledge about MCP. Could we use this to improve the ChatGPT-SWI Prolog Assistant?

I’m happy to host such a server on swi-prolog.org. In particular if there is someone willing to maintain and extend it :slight_smile:

Hi Jan, let me check out the ChatGPT-SWI Prolog Assistant when I am back at my laptop next week. My guess is that if you have already set up a custom interface between the LLM and the Prolog server, MCP will not add functionality directly.

The main use case I see for MCP:
Allow any LLM agent/client that speaks MCP to dynamically access the Prolog MCP server (without having to set up a custom interface) and to “decide” whether to write Prolog code and run it.
I think it would be great to have a publicly available Prolog MCP server; I had not thought of that.

Overall the open question is, I think, whether Prolog can actually add to the reasoning capabilities of an LLM agent. Obviously LLMs themselves are pretty poor at logical reasoning.
That being said, my limited tests (and, I believe, those of others) on using LLM+Prolog were pretty sobering. Of course the Prolog results can only be as good as the Prolog code. In my tests the LLM (openai:gpt-4o) often did very poorly at translating a reasoning question into consistent Prolog code (clauses + query).
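To make that failure mode concrete, here is a toy Python check (my own illustration, not part of the server) that flags one common kind of inconsistency: a query calling a predicate that the generated clauses never define. It is deliberately naive (matches functor names only, ignores arity and built-ins):

```python
import re

def head_functors(program):
    """Naive sketch: functor names appearing as clause heads (start of a line)."""
    return set(re.findall(r"(?m)^([a-z]\w*)", program))

def query_functors(query):
    """Functor names used in a query (ignores built-ins and arity)."""
    return set(re.findall(r"([a-z]\w*)\(", query))

def undefined_in_query(program, query):
    """Predicates the query calls but the program never defines."""
    return query_functors(query) - head_functors(program)

program = """\
parent(tom, bob).
parent(bob, ann).
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
"""

# Consistent: every predicate the query calls has a defining clause.
print(undefined_in_query(program, "grandparent(tom, Who)."))  # -> set()
# Inconsistent, the kind of mismatch I saw from gpt-4o: the query
# references a predicate the generated clauses never define.
print(undefined_in_query(program, "ancestor(tom, Who)."))     # -> {'ancestor'}
```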

I did not. The ChatGPT-SWI Prolog Assistant is a simple shared GPT that only has some instructions on its role and on focusing, as far as possible, on publicly available information about SWI-Prolog. I am fairly impressed by how well it answers questions. It isn’t very good at programming, but it seems pretty good at finding the built-ins and libraries you need to glue together to get something done. After which it writes sloppy code connecting the ends … Well, my experience with C is not much better.

If you can help improve the situation, please do. I’m glad to hand this stuff over to anyone more knowledgeable :slight_smile:

I agree it would be nice to allow the ChatGPT-SWI Prolog Assistant to directly run Prolog code (via an MCP server).

Right now I see at least two things preventing us from doing that:

  1. OpenAI has not rolled out MCP support for the ChatGPT interface yet (though they have started making it available for Deep Research)
  2. So far I have implemented the prolog-mcp-server for single users only: each user is meant to start their own server, and there is no isolation between users.

I will work on #2 and will make use of Pengines. This way we should be able to run one instance of the server and make it accessible to multiple ChatGPT users (once OpenAI supports that).
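A rough sketch of how the Pengines route could give isolation: each request creates its own sandboxed pengine over the Pengines HTTP API, loads the user's clauses into that pengine only, and destroys it after the answer. The endpoint, option names, and the ask_pengine helper below are my assumptions based on the standard pengine create options, and the network call itself is untested:

```python
import json
from urllib import request

SERVER = "https://swish.swi-prolog.org"  # assumption: a public pengines endpoint

def create_payload(src_text, ask, template):
    """Options for POST /pengine/create. Each call spawns its own sandboxed
    pengine, which is what would isolate users from one another on one server."""
    return {
        "src_text": src_text,  # clauses loaded into this pengine only
        "ask": ask,            # query to run on creation
        "template": template,  # variables to report back
        "format": "json",
        "destroy": True,       # dispose of the pengine after the answer
    }

def ask_pengine(server, payload):
    """Sketch only: an untested network call against a pengines server."""
    req = request.Request(
        server + "/pengine/create",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# One user's clauses and query never touch another user's pengine:
payload = create_payload("p(1).\np(2).\n", "p(X)", "X")
```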

Good plan. We can use the SWISH server as pengine server for a public version.