I just used the built-in json_to_prolog/2 since that is what the http package already uses. You’re right that I should have pointed at that in the docs. https://github.com/EricZinda/swiplserver/issues/11
I believe using the same encoding there is a good thing for the platform. If we want to fix that and add an optional different encoding, it seems like both could benefit. I don’t think I want to take this one on, but I would use it if someone did.
The ironic thing is that this format is actually what I use internally in my big use of swiplserver at the moment. So I agree with you that it is nicer. I just wanted to keep it consistent with the rest of the platform. I convert with this function before I use the JSON:
def ConvertStdJson(self, value):
    # Recursively turn the standard {"functor": ..., "args": [...]} encoding
    # into the more compact {functor: converted_args} form.
    if isinstance(value, dict):
        return {value["functor"]: self.ConvertStdJson(value["args"])}
    elif isinstance(value, list):
        return [self.ConvertStdJson(x) for x in value]
    else:
        return value
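For example (the values are hypothetical, and atoms may actually arrive encoded as strings), a term like foo(bar, 1) comes in via the standard functor/args encoding and comes out of the function above as a plain dict:
# Hypothetical illustration; "converter" is whatever object defines ConvertStdJson.
std_json = {"functor": "foo", "args": ["bar", 1]}
converted = converter.ConvertStdJson(std_json)
# converted == {"foo": ["bar", 1]}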
Passing the string length along with the command lets us get around the issue that Prolog’s read_term/3 will hang waiting for more characters if the caller sends a Prolog term that is invalid because it isn’t finished, i.e. something like my_atom(foo, bar. With the length, we can read the whole string first and hand it to read_term_from_atom/3, which will throw instead. That way, we can report invalid Prolog without doing something like a timeout. Just seemed cleaner.
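Here is a rough sketch of the idea (the header format and helper names are illustrative, not the actual wire protocol): the sender prefixes each message with its byte length, so the receiver can read exactly that many bytes and then parse, rather than sitting in read_term/3 waiting for input that will never arrive.
import socket

def send_message(sock: socket.socket, text: str) -> None:
    # Illustrative framing: "<byte length>.\n" followed by the payload.
    payload = text.encode("utf-8")
    sock.sendall(f"{len(payload)}.\n".encode("utf-8") + payload)

def recv_message(sock: socket.socket) -> str:
    # Read the length header up to its delimiter...
    header = b""
    while not header.endswith(b".\n"):
        chunk = sock.recv(1)
        if not chunk:
            raise ConnectionError("socket closed while reading header")
        header += chunk
    length = int(header[:-2])
    # ...then read exactly that many payload bytes before parsing.
    data = b""
    while len(data) < length:
        chunk = sock.recv(length - len(data))
        if not chunk:
            raise ConnectionError("socket closed while reading payload")
        data += chunk
    return data.decode("utf-8")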
My intention here was to have a tiny number of commands to keep the overhead of writing a new language library very low. So I’m hoping I never end up wishing it were easier to add more.
I did debate making the input JSON instead of a string, though, just for consistency. In my usage of the system, strings seemed more natural from the Python side, but that’s just my usage. I’ll add an issue to the list and see what other folks think as they use it.
BTW: Let’s do any further discussion using the issues at GitHub since I fear this thread will become unreadable…
Thought 1: I strongly suspect that new people building a system with Prolog will start by iterating at the top level (the REPL) a bunch until they get their Prolog code into approximately the right form. Otherwise, it will be just too painful and slow to get things right. Once it’s in roughly the right form, they’ll switch to the Python side. So, the queries they are running will mostly be things they’ve already tried out.
Thought 2: I also suspect that it is rare to want to run a query and only use the first result (otherwise you’d use once/1). If my first thought is right, then you’ll normally want all the answers by default, unless you want to stream the answers one by one. And for that case you’ll use the async version.
The challenge here is that several things can get returned: true([with answers]), false, or exception(). It seemed to me that keeping a consistent model, where the result on the Prolog side is a single term that gets converted into JSON using the standard serialization, simplified understanding of the protocol and made handling of the responses on the language library side easier.
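To make that concrete (the exact nesting and atom encoding here are my illustration, not a spec), each response is one of a small set of terms, and each comes across in the same functor/args encoding shown earlier:
# Illustrative shapes only; the real nesting and atom encoding may differ slightly.
# A successful query: true(AnswerList), where each answer is a list of variable bindings.
success = {"functor": "true", "args": [[[{"functor": "=", "args": ["X", 1]}]]]}
# A failed query: the atom false.
failure = "false"
# An error: exception(Reason).
error = {"functor": "exception", "args": ["some_reason"]}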
It also doesn’t feel too onerous on the Python side, for example. Here’s my library code that converts the true() answers into a form that is more like what I think you’re suggesting.
answerList = []
for answer in prolog_args(jsonResult)[0]:
    if len(answer) == 0:
        answerList.append(True)
    else:
        answerDict = {}
        for answerAssignment in answer:
            # These will all be =(Variable, Term) terms
            answerDict[prolog_args(answerAssignment)[0]] = prolog_args(answerAssignment)[1]
        answerList.append(answerDict)
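For a hypothetical query like member(X, [1, 2]), where the serialized response is true([[=(X, 1)], [=(X, 2)]]), the loop above produces:
# Hypothetical output of the conversion above:
# answerList == [{"X": 1}, {"X": 2}]
# A ground query that simply succeeds would instead produce [True].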
So, if you yourself use that format because it’s more convenient, why is the protocol sending a format that’s less convenient? If your goal is to make it easy to write new language glue libraries, shouldn’t the protocol be as close to the intended usage as possible?
Remember, no one outside the Prolog community has used the built-in Prolog term conversion functions (or, most likely, any of the rest of the Prolog libraries), so I feel like it’d make more sense to base your communications protocol around what is standard for the rest of the tech world, not around what is standard for Prolog.
I guess I’d argue that run_async is async, from a “the client is not synchronously waiting for a result” point of view, but I see your point.
My thinking here was to keep the interface as straightforward as possible. Each connection represents a thread that can run one query at a time. The async API on a thread allows you to retrieve answers without waiting (i.e. asynchronously) as they are available. If you want to run queries concurrently, you can create a new connection (i.e. thread) and run them concurrently.
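As a sketch of that model using the Python library (the class and method names here are taken from the current swiplserver API and may not match the version under discussion, so treat them as an assumption):
from swiplserver import PrologMQI

with PrologMQI() as mqi:
    # One connection corresponds to one Prolog thread, running one query at a time.
    with mqi.create_thread() as prolog_thread:
        # Start the query without blocking for all answers up front.
        prolog_thread.query_async("member(X, [1, 2, 3])", find_all=False)
        while True:
            # Pull answers back one at a time as they become available.
            result = prolog_thread.query_async_result()
            if result is None:
                break
            print(result)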
I see the benefit, though, of having a way to fire off a bunch of queries on one thread and checking back to see when they are done. Certainly the library writer, or the developer, could each do this at their layer, given what is provided already.
OK, let me think about this a bit; I do agree it is going to be something people will want, one way or another.
That’s exactly why I suggest wrapping the input in JSON. You can’t depend on people knowing the right format of a Prolog term, but everyone knows (by which I mean there are libraries available for basically every language) how to read and write JSON. That way you’re never in doubt as to what the intended input is.
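For example (this is a purely hypothetical request shape, not something either protocol defines today), instead of sending the raw string "my_atom(foo, bar)." the client could send:
# Hypothetical JSON-wrapped request; any JSON parser can tell immediately
# whether the message is complete and well-formed before Prolog ever sees it.
request = {"run": {"functor": "my_atom", "args": ["foo", "bar"]}}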
You may be underestimating how intimidating the Prolog REPL is to newcomers. My expectation would be that, if this project does take off and more people start using it, most people’s introduction will probably be something like “hey, there’s this really cool package that lets you add programmable AI to your project, you just have to put this one library in there and make these method calls and it’ll just work”.
Really, it points to what the target audience is, and only you can answer that. If the target audience is “Prolog developers who want to be able to work in another language”, then it makes sense that you should do things with Prolog syntax and expect people to use Prolog constructs like once/1. It sounded to me, though, like you want your audience to be “Application developers who don’t know Prolog but find it interesting and want to try it out in conjunction with something they’re familiar with,” so avoiding penalizing newbie missteps like “didn’t put a period on the end of a term” or “didn’t realize that the non-determinacy of this predicate was a problem” will make things friendlier for new Prolog developers.
I think much will depend on the IDE in use – how well it supports debugging in an integrated way.
Otherwise, from an engineering perspective, a developer will probably want to work in a Prolog development environment with a proxy/surrogate client in Prolog and related test cases to get the AI working before writing the Python application code.
Otherwise, there would be too many moving parts that can go wrong to deal with all at once.