Hi Jan, I will check out the ChatGPT-SWI Prolog Assistant when I am back at my laptop next week. My guess is that, if you have already set up a custom interface between the LLM and a Prolog server, MCP will not add functionality directly.
The main use case I see for MCP:
Allow any LLM agent/client that knows MCP to dynamically access the Prolog MCP server (without having to set up a custom interface) and to “decide” for itself whether to write Prolog code and run it.
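For concreteness, here is a minimal sketch of what the Prolog side of such a tool could look like. The predicate name `mcp_run_query/3` and the `scratch` module are my own invention, not an existing library, and a real server would still need the MCP JSON-RPC framing around it:

```prolog
:- use_module(library(sandbox)).

% Hypothetical core of an MCP "run_query" tool: load client-supplied
% clauses into a scratch module, vet the goal, collect all solutions.
mcp_run_query(ClausesText, QueryText, Solutions) :-
    term_string(Goal, QueryText),
    setup_call_cleanup(
        open_string(ClausesText, In),
        load_files(scratch:data, [stream(In)]),
        close(In)),
    safe_goal(scratch:Goal),              % refuse side-effecting goals
    findall(Goal, scratch:Goal, Solutions).
```

An MCP-aware agent could then discover this tool via `tools/list` and call it with the clauses and query it generated, with neither side needing to know anything about the other in advance.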
I think it would be great to have a publicly available Prolog MCP server; I had not thought of that.
Overall the open question is, I think, whether Prolog can actually add to the reasoning capabilities of an LLM agent. Obviously LLMs themselves are pretty poor at logical reasoning.
That said, my limited tests (and, I believe, those of others) using LLM+Prolog were pretty sobering. Of course the Prolog results can only be as good as the Prolog code. In my tests the LLM (openai:gpt-4o) often did very poorly at translating a reasoning question into consistent Prolog code (clauses + query).
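To illustrate the kind of consistency that has to hold (a toy example of mine, not from those tests): even for a trivial question the model must keep the argument order, the transitive closure, and the negation all aligned; getting any one of them wrong silently yields a wrong or empty answer.

```prolog
% "Alice is taller than Bob; Bob is taller than Carol. Who is tallest?"
taller(alice, bob).
taller(bob, carol).

% Transitive closure of taller/2.
taller_tc(X, Y) :- taller(X, Y).
taller_tc(X, Z) :- taller(X, Y), taller_tc(Y, Z).

% Tallest: taller than someone, and nobody is taller than them.
tallest(X) :- taller(X, _), \+ taller_tc(_, X).

% ?- tallest(Who).
% Who = alice.
```

If the model flips the arguments of `taller/2`, forgets the closure, or negates the wrong goal, the query still runs but returns nonsense, which is exactly the failure mode I kept seeing.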
I did not. The ChatGPT-SWI Prolog Assistant is a simple shared GPT that only has some instructions on its role, focusing it on publicly available information about SWI-Prolog as far as possible. I am fairly impressed by how well it answers questions. It isn’t very good at programming, but it seems pretty good at finding the built-ins and libraries you need to glue together to get something done. After which it writes sloppy code connecting the ends … Well, my experience with C is not much better.
If you can help improve the situation, please do. I’m glad to hand this stuff over to anyone more knowledgeable.