Http_post and OpenAI/ChatGPT API calls

I note that there are similar topics here, but this is a bit different.

First, I’m trying to make API calls using the http_client and http_json libraries. I can make some queries, but not all. I believe the JSON format needs to be different for the different models, and I cannot get gpt-3.5-turbo to work for me in Prolog.

I can make the queries using curl, and will include the curl command for reference in a bit. I thought it would be simple enough to convert the query to Prolog, but I think there are tricks for formatting JSON which are preventing me. Anyhow, I know the correct query syntax, and the API key and endpoint are fine.

There is a project on GitHub for ChatGPT integration with SWI-Prolog. It does not work with all models, in particular gpt-3.5-turbo. Again, I believe that the JSON syntax is different for the newer model. I just get a “BAD REQUEST” when I try.

Here is the curl command:

curl -X POST '' -H 'Authorization: Bearer <API-KEY>' -H 'Content-Type: application/json' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "What is your favorite OpenAI model?"}
    ],
    "max_tokens": 50
}'

I hesitate to show my attempts so far, but will do so. Please realize that I have been flailing at different attempts, and have tried all sorts of permutations. A vague return value like “BAD REQUEST” doesn’t give me much information to go on.

    :- use_module(library(http/http_client)).
    :- use_module(library(http/http_json)).  % registers json(...) post data

    chat_test(Response) :-
            EndPoint = '',
            APIKey = '<API-KEY>',
            Model = 'gpt-3.5-turbo',
            Prompt = 'What is your favorite programming language?',
            % messages must be a JSON array of objects; serializing it to
            % an atom first would send it as a single string instead.
            Data = json([model=Model, temperature=0.7, max_tokens=50,
                    messages=[json([role=user, content=Prompt])]]),
            http_post(EndPoint, json(Data), Response,
                    [authorization(bearer(APIKey))]).
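For reference, the chat endpoint nests the generated text under choices → message → content in its reply. A hedged sketch of pulling that out of SWI-Prolog's json/1 term representation (the predicate name extract_reply is my own, not from any library):

    extract_reply(json(Reply), Content) :-
        memberchk(choices=[json(Choice)|_], Reply),
        memberchk(message=json(Message), Choice),
        memberchk(content=Content, Message).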

The one I was thinking of was this:

It looks pretty good, but fails for certain models. My best guess is that the JSON format is different for the newer models.

For example:

   Data = atom(application/json, D),
   (  Raw = false
   -> gpt_extract_data(choices, text, ReturnData, Result)
   ;  Result = ReturnData
   )

I can make the same http_post call and get the same results.  But nothing that I've tried has successfully formatted the JSON for the newer engines.  I gave the curl example of a successful use of a new engine.

PS:  I didn't know that posting a link would embed the code.  Thankfully it is inside blocks, but I was intending only on sharing the link.

I don’t think this is an API issue.

I can change JUST the model, and the query runs. So “davinci” works, but “gpt-3.5-turbo” does not. That tells me that the issue is with the JSON formatting.
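For what it's worth, the two generations of the API do expect different payload shapes: the older completions models take a prompt string, while the chat models take a messages array of role/content objects. In SWI-Prolog json/1 terms, the difference looks roughly like this (a sketch, with placeholder values):

    % legacy completions payload (e.g. davinci):
    json([model=davinci, prompt='Say hello', max_tokens=50])

    % chat completions payload (e.g. gpt-3.5-turbo):
    json([model='gpt-3.5-turbo',
          messages=[json([role=user, content='Say hello'])],
          max_tokens=50])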

The endpoint (URL) is correct, or I wouldn’t be able to make any queries. And there is a different response for BAD URL and BAD REQUEST. A point which I was oblivious to for too long…

I wish there was a way to compare what the http_client library generates and what curl generates. Well I guess I could install a packet filter and capture the actual network traffic, but I was hoping to run across a JSON guru who would look at my code and say that it was the stupidest attempt at formatting JSON that they’ve ever seen, and “of course” it wouldn’t run.
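Short of a packet capture, one way to see what library(http/http_json) would send is to serialize the term yourself with atom_json_term/3 and print it before posting; the resulting atom should match curl's -d payload apart from whitespace. A sketch, assuming a payload like the one above:

    ?- atom_json_term(A,
           json([model='gpt-3.5-turbo',
                 messages=[json([role=user, content='Hello'])],
                 max_tokens=50]),
           []),
       format("~w~n", [A]).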

I really want to get the http libraries to work for me, as too many layers start making me worry about things. I’m already coding in Logtalk, and using the SWI-Prolog backend.

In the end, I’ll do what I need to do. But it comes down to believing that the http libraries can work, and that I just don’t know how to use them properly.

If curl can format JSON properly, then I expect the http_json library can do the same (if used correctly). That is a skill worth learning.

Have you seen this post here?

Wow, that looks like what I need to do! Thanks! I’ve searched the forums, but I recall skipping over that message because I was not planning on using a proxy.