With no account, all I see is a transcript of your conversation.
With this one, I can see the conversation, and with my basic account I can continue the chat in ChatGPT.
It seems to me this sort of assistant is trivial to set up, and of trivial value. I can just as easily ask the questions myself and get the same result, so it’s not clear to me how “ChatGPT SWI-Prolog assistant” goes beyond what anybody could easily achieve without having the assistant.
I’m sincerely asking if I’m missing the point.
Yes, this is true; you have it right.
For those familiar with SWI-Prolog and with prompting LLMs, this is trivial. For someone not familiar with SWI-Prolog and/or LLM prompting, having the SWI-Prolog GPT could be the difference between completing their first Prolog coding assignment or not.
For me this is the tip of the iceberg of possibilities, but if many users cannot even access these shares or use the ChatGPT GPT, that tells me that now is not the time to move forward (think MCP). The holdup with MCP, from talking with others just today, is waiting for OAuth to be required.
HTH
How should I read this? Do you claim that the SWI-Prolog assistant has no value over just using an LLM, or that an LLM assistant has no value?
As for the first, the value is only that the assistant is more focused on the task and specifically targets SWI-Prolog (most of the time) without needing a lengthy prompt to achieve the same. As for the second, it depends on your knowledge. It is not very likely to help me find the right Prolog predicate(s) for a task, but it is likely to help less experienced users find the right libraries and predicates to solve a more complicated task. In my (limited) experience using this for C, it is typically capable of finding a set of functions that have to be combined to realise some task I asked it to realise. It often gets the details wrong and sometimes it is completely wrong. Quite often it comes up with a good starting point, though.
Experiences with the SWI-Prolog assistant are similar. It is not very good at writing complex Prolog code unless it is a more or less standard problem as used in exercises. It is quite good at tasks such as reading CSV files, making a network connection, etc.
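For the CSV case, the kind of starting point it tends to produce looks something like the sketch below, based on SWI-Prolog's `library(csv)`; the file name and the predicate name `print_csv/1` are just placeholders of my own:

```prolog
:- use_module(library(csv)).

% Read a CSV file into a list of row/N terms and print each row.
% 'data.csv' is a placeholder file name.
print_csv(File) :-
    csv_read_file(File, Rows, []),
    forall(member(Row, Rows),
           writeln(Row)).

% Example query:
% ?- print_csv('data.csv').
```

As with the C experience above, the details (e.g. the options list of `csv_read_file/3`) are where such generated code most often needs correcting.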
That is unfortunate, but unavoidable. We could set up a service where that is not needed, but that would be far above our budget. Hopefully this will improve in the future. As long as that is not the case, I still think it is valuable to explore tools that can boost productivity.
One quick option worth considering is to create several different prompts tailored for even more specific tasks:
- Create test cases for this SWI-Prolog code.
- Comment the SWI-Prolog code.
- Convert this Python code to SWI-Prolog.
- Create a GitHub pull request based on the changes to the code.
- Create documentation for the SWI-Prolog predicate in the style of PLDoc.
- Create SWI-Prolog DCGs to parse this BNF.
etc.
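To illustrate the last item, here is a hand-written sketch of the sort of answer such a prompt might aim for: a small DCG for a toy BNF of additive integer expressions. The grammar and the names `expr//1` and `term//1` are my own invention, not output from the assistant:

```prolog
:- use_module(library(dcg/basics)).

% Toy BNF:  <expr> ::= <term> "+" <expr> | <term>
%           <term> ::= integer
expr(plus(T, E)) --> term(T), "+", expr(E).
expr(T)          --> term(T).
term(N)          --> integer(N).   % integer//1 from library(dcg/basics)

% Example query:
% ?- phrase(expr(X), `1+2+3`).
% X = plus(1, plus(2, 3)).
```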
While it might seem obvious to embed each prompt in a different assistant, this approach could be problematic because switching assistants often starts a new conversation. Any code created in one conversation would therefore not be available to the next assistant.
A better approach would be to store all such prompts with the SWI-Prolog GitHub account and add them manually to an LLM conversation. Alternatively, having these prompts available as templates would allow you to tailor them as needed.
Although I have not created an MCP client or server, I hope that with MCP, many such problems will be minimized.