s(CASP) meets JSON

[cross post from a post to GDE-21 attendees]

Hi all,

In a smaller circle we have been discussing the need for JSON output.
Ideally we’d base that on something established. Clingo has JSON output,
which seems close. Unfortunately it simply emits the model as a list of
strings, e.g. “sleeps(bob)”. That works reasonably well as long as your
model is ground and doesn’t contain function symbols. It doesn’t work as
nicely for s(CASP). In addition, we also have a justification. Joaquin
asked around in the CLP community, but there was little concrete input
there either.

So, I decided to give it a try. One starting point was the work on
Pengines, the underlying protocol used by the online version of
SWI-Prolog (SWISH). Its term representation has also been used by MQI,
another contributed project providing a JSON-based Prolog API.

The SWI-Prolog version of s(CASP) now allows for (use --json=- for stdout)

scasp --json=file input.pl

If you do not want to install anything, there is a web service at
s(CASP) web server, which you can call e.g. using

curl --data-binary @test/programs/birds.pl -H "Content-Type: text/x-scasp" -X POST https://dev.swi-prolog.org/scasp/solve

See s(CASP) web server -- help for some more details on the
API. Feel free to hammer it. The server is a rather old and slow
32-core cloud (virtual) machine that doesn’t do anything critical.

I have not yet made a description of the JSON format. It should be fairly
self-explanatory. Could anyone interested in this have a look and give
feedback so we can settle this?

At the moment it should be able to deal with all s(CASP) terms except for
partial lists.

Regards --- Jan

p.s. For local installation

- Install SWI-Prolog 8.5.1
- git clone https://github.com/JanWielemaker/sCASP.git
    - Build scasp using `make`
- Or start the web server as above (default port 8080) using

    swipl examples/dyncall/http.pl [-p port]

Just took a look at the JSON output. It doesn’t seem to honour the --human or --tree flags and include their results in the output. Both would be valuable to have in the JSON package returned. Also, I would prefer it if models were grouped by whether or not they have the same bindings. If they do, in my context they constitute different “reasons” for the same “answer”. Having to parse through the JSON and combine answers with the same bindings is an extra step I’d like to avoid. But for other purposes that may be totally inappropriate, I don’t know.

Thanks for having a look. --tree should work. At least, this works for me:

  scasp --json=- --tree test/programs/family.pl

--human is indeed not supported. It was my perception that we want JSON to talk to machines. What is your use case? We could of course return the sentences in a nested JSON list.

That is something we are discussing in the project for which s(CASP) is being ported. Technically we are in the position that we will use Prolog to do the input preprocessing and output post-processing. That is a lot easier :slight_smile: We are not yet sure how we must combine the models from different answers. If you have experience, I’m sure we’d like to have a chat about that. Possibly we can define one or more post processing steps and have them executed in the solver before returning the combined result?

One of the things I’m working on is to simplify the model. First of all by removing model terms that are entailed by other model terms. I’m also wondering whether we should keep not(p) if we also have -p and similarly remove not(-p) if we have p. That is not entailment. One version could be regarded as stronger than the other, so what is the value of the weaker claim?
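
As a sketch of what such a filtering step could look like (assuming the model comes back as a plain Prolog list of terms, with -P for classical negation and not(P) for negation as failure; none of this is an existing option):

    % drop_weaker(+Model0, -Model)
    % Remove not(P) when -P is in the model, and not(-P) when P is,
    % keeping only the stronger claim in each case.
    drop_weaker(Model0, Model) :-
        exclude(weaker_claim(Model0), Model0, Model).

    % not(-P) is weaker than P; not(P) is weaker than -P.
    weaker_claim(Model, not(-P)) :- memberchk(P, Model).
    weaker_claim(Model, not(P))  :- P \= -_, memberchk(-P, Model).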

The use case is a user-facing app that explains the decision to the user using the --human --tree output, along with some post-processing to remove global constraints and abducibles.

That’s a reasonable way to proceed. As I say, I’m not sure grouping by bindings makes sense for all use cases, it just makes sense for mine. Not sure I would have much to offer the conversation beyond that.

There is a lot of stuff that you can do to post-process the models, and more specifically, the justifications, to reduce the complexity and display the results to the user in a more coherent way. Particularly multiple justifications for the same bindings. I haven’t DONE any of that stuff, yet, but I sort of have it in my head as a future step.

I don’t know what your use case is, but in terms of the models, what I would like to be able to see is a version of the model that includes information on which statements in the model were abduced, which were provided as facts, and which were derived. That way, I could check to see whether or not I have a concrete answer to the user’s question despite not yet having collected all the relevant pieces of information (if there are models with no abduced elements), or I can use the abduced elements as a hint to the app about what questions it might make sense to ask next. But that would require making the models noisier, not simpler.

I’m not sure what you want to do with the models that would call for simplifying in the way you specify. I’m skeptical whether that would be helpful.

On the negation issue, it’s impossible for s(CASP) to know whether not p or -p is the piece of information that the user is interested in.

For example, in legal applications, not guilty means that the prosecutor failed to meet the standard of proof required, but -guilty means the person was exonerated, which is a different thing. If you remove not guilty from the model when -guilty holds, then I have to search for two different things in order to find all the models in which the person was not convicted. If you leave all the information in, then I can just search the models for not guilty and I will get everything.

I think there are probably analogous reasons to avoid simplifying the models to remove things that are entailed. So from my perspective the models have too little information, not too much. But feel free to convince me otherwise.

Sorry, let me be more clear. I’m anticipating a “Rules as Code” scenario in which a ruleset that is relevant to more than one application is encoded and made available as a Web API. Each application, rather than re-implementing the ruleset, sends data to the Web API and displays the results. Then, when the rules change, you can change the ruleset in one place, and all the relevant apps don’t need to be modified as long as the relevant data structures haven’t changed.

In that use case, if the application wants to display a natural language justification for the answer it gets from the Web API, it needs to be included in the JSON response.

I really agree with this. More than once the model has shown me a logical conclusion that I had never considered. It would be a pity to lose that.

Ok. Alternatively you could of course use the HTML output. That provides both the human and the formal model and justification, plus CSS classes that allow you to pinpoint exactly what is what. It might be a good idea to review the details of this format, and fix and document them?

If we go the JSON route, I guess the model becomes an array of strings and the justification a nested object of strings (a tree). We could also attach the human description as a property to the formal representation as it is used now. Opinions?

Combining models seems a logical step for some cases. I’m not sure the bindings are always the right thing here. Doesn’t that assume the Prolog way of thinking, where an answer is a set of bindings? Here the core of the answer seems to be the model, no? Some terms in the model have special meaning to the application, as we can indicate using #show ...

Thanks for these remarks. It seems the interpretation of the positive (p), not-negative (not -p), not (not p) and negative (-p) terms is rather domain specific. From a logic point of view there is only so much one can do, and I guess we can define options for that. Roughly, we can remove entailed model terms, e.g. given p(X | {X \= a}) and p(b), the latter is surely entailed by the first. Second, we can remove weaker claims.

As for abducibles, the raw model contains abducible(X) terms. They are now omitted the same way as the Ciao version does. Maybe that is not a good idea? In the unfiltered model there is also proved(X). This seems irrelevant: it just means X was derived twice, once the hard way and the second time because it was already done. There is also chs(X), which means it is in the model because it was assumed before and the result is a consistent model. Note that you can get all this information from the justification, though. It is not entirely clear why you would want to use the model for that.

I’d rather the reasoner just gave me data, and let me decide whether and how to format it. It increases the re-usability of the output, so the same knowledge representation can be used for a variety of user-facing tools.

Agreed. It strikes me that there are likely situations in which the “human” description is useful for some things and problematic for others, because it can’t be as reliably parsed. But I also understand there is a performance hit for the human versions. Best case scenario might be to allow you to choose either or both, instead of the current method which allows you to choose only one.

I’m sort of out of my depth, here, but what I will say is that in Prolog, you get bindings, in ASP, you get a model, and in goal-directed ASP, you get bindings and a minimal model, and a justification. So I would say that the “answer” in s(CASP) is the combination of the three, not primarily the model. But again, I don’t know if grouping by bindings makes sense outside of my domain.

We can, but I don’t see why we would want to. I estimate the chances that the actual models, or even the human versions, are going to be displayed to users without post-processing to be approximately zero. The model being too big, and including things that you don’t need, is not an actual pain point for anyone, I don’t think. It is not “having weaker claims included” that is the problem. It is “not being able to clearly identify the weaker claims” that is an issue. Also, there is a semantics associated with what is and is not included, because it is “minimal”. If you start removing things from the model, you are returning a sub-minimal model, which means interpreting the model now depends on knowing what flags were included in the query. That seems sub-optimal. I would say give the user options to add information about the elements in the model, not remove them.

I don’t like the way that abduction is included in the justifications. I have in the past needed to trim out the redundant parts to make the justifications more usable. So I would advocate for removing things from the justifications and adding them to the models.

Here’s a use case for why I would like the abduced elements and the elements provided as facts included in the model:

Let’s say I have an expert system application, and I’m using s(CASP) to determine what inputs might be relevant to the user’s query. In order to tell the expert system what inputs might be relevant, I set the list of all inputs to abducible, run the query, and get a set of models. The abducible elements included in the resulting models are the questions that are relevant to the query. The ones that aren’t are clearly never relevant. I can send the list of “ever relevant” inputs to the app, and the app can decide what order to ask them in, ask one, add that fact, remove the abducibility over that input (if the user had an answer), and repeat the query.
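
To make that concrete, here is a sketch of the kind of encoding I have in mind (the rules and input names are invented, purely for illustration):

    % Hypothetical ruleset: two independent routes to eligibility.
    eligible :- resident.
    eligible :- veteran.

    % Declare every input abducible so the solver may assume it.
    #abducible resident, veteran, employed.

    % ?- eligible.
    % One model assumes resident, another assumes veteran; employed is
    % never abduced, so it is never relevant and need not be asked.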

Having to search the justification trees of all of the returned models in order to get the list of ever relevant inputs is needlessly painful. Just let me do a union of all of the models, and then take out the “abduced” elements. Whether the information is in the model, or annotated somehow, is neither here nor there. The point is that having to dig through multiple justification trees to find them all is silly.
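
For what it’s worth, if the raw models were available as lists of terms carrying the abducible(X) markers mentioned earlier in the thread, that union would only be a few lines of Prolog (a sketch; the list-of-lists representation is an assumption):

    % ever_relevant(+Models, -Inputs)
    % Models is a list of raw s(CASP) models, each a list of terms.
    % Inputs is the set of terms abduced in at least one model.
    ever_relevant(Models, Inputs) :-
        findall(X,
                ( member(Model, Models),
                  member(abducible(X), Model)
                ),
                All),
        sort(All, Inputs).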

The application is going to need to take into account the difference between things that are abducible because they haven’t been put to the user yet, and the things that are abducible because they were put to the user and the user answered “I don’t know.” But I think that’s a problem to solve above the level of s(CASP). What constitutes an “input” is also something that would happen at a higher level, I think. But that’s the case for adding abduced info to the responses, somewhere, not in the justifications.

No, not at all. If all of the relevant inputs are made abducible, then the union of the abduced elements in all of the models returned is every input that could potentially be used to reach the conclusion. If an input isn’t on that list, it’s entirely irrelevant to the question, and shouldn’t be asked.

Yes, precisely. A model in s(CASP) is a combination of an answer and a reason. In legal applications, it’s valuable to know not just whether a query is true, but why, and all the reasons why. Particularly if you are trying to model the behaviour of a piece of legislation. Suppose, for example, that p was intended to be true only when q was true. You set all the inputs to abducible and query p. You get two models instead of one, and the second one doesn’t include q. Now you know that your legislation is permitting p to be concluded at times it wasn’t supposed to. So the multiple models are critical to understanding what the law is actually doing. I’m looking at tools aimed at those sorts of analytical tasks.
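
A minimal sketch of that situation (the rules are invented, purely illustrative):

    % Intended: p should only hold when q holds.
    p :- q.
    % A drafting error: a second, unintended route to p.
    p :- r.

    #abducible q, r.

    % ?- p.
    % One model assumes q (the intended route); a second model assumes
    % r without q, exposing that the encoding permits p when it was
    % not supposed to.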

I’m familiar with expert systems in Prolog, generally. Even tried writing one a couple of years ago, but didn’t get very far. The ability to translate “yes/no/I don’t know” to “p/-p/#abducible p” is something I don’t know how to duplicate in Prolog. And inside s(CASP), I don’t have access to read() and write(), etc.

I am also pessimistic about unmitigated backward chaining as a method of choosing question order. Particularly when clause order is relevant to the code’s behaviour.

If you’re aware of an approach to collecting information from the user expert-system-style that is a) powered off of an s(CASP) encoding of the law, b) allows sophisticated control over question order without requiring changes to the knowledge representation of the law, and c) allows you to add and remove facts and abducibility statements from s(CASP) code as required to represent the user’s answers, I would be interested in seeing it.

The approach I have used in the past is to take the s(CASP) encoding, parse it, and generate a dependency network. Then take the query, find the node in the network for the query predicate, and deem all of the nodes descending from the query node “relevant”. (All of this takes into account the defeasibility method I’m using.) Then take a LExSIS file that describes a data structure (and implicit question order) for collecting information from the user, and how to translate that information into s(CASP) code. Use all of that to generate the YAML and Python code for a Docassemble interview that collects the relevant information in that order and translates it into s(CASP) statements. Finally, combine the legislation and the newly generated s(CASP) statements into a new s(CASP) program, send that program to s(CASP), and display the result to the user.

Examples of the input s(CASP) and LExSIS files are here:
Encoding of the rule in s(CASP): docassemble-l4/r34.pl at main · smucclaw/docassemble-l4 (github.com)
Encoding of the Data Structure in LExSIS: docassemble-l4/r34.yml at main · smucclaw/docassemble-l4 (github.com)

A video of what the end result looks like is here: https://www.youtube.com/watch?v=NEjrV4Wwyh8

If there is an easy way to duplicate that in Prolog, whether with a text or web-based front end, let me know. :slight_smile:

This might be of interest to you: https://www.executable-english.com/ … the “demo” isn’t working properly for me, but there are some canned examples and you can see how the rules are created and how their explanation is done (e.g., the classic British Nationality Act). I don’t think that this is “abduction”; it’s more like Datalog with some flavour of negation.

Thanks for that. Haven’t seen it before. It reminds me of Logical English, which is something that Bob Kowalski and his company have been talking about, recently.

I’m generally pretty skeptical of tools that force you to draft legislation in something approximating natural language. My experience with tools in that genre, like Oracle Intelligent Advisor, is that the result is relatively easy to read, but surprisingly difficult to write. My experience is that the similarity to natural language makes the semantics opaque, and I end up violating it constantly without knowing that I’m doing it, or why.

My preferred approach is the one in Blawx (unsurprising), where you can use “phrases” to write the code, but you aren’t expected to type them, instead you drag and drop them from a list of valid options.

s(CASP), for the record, provides a similar capability to allow you to translate your s(CASP) encoding to natural language, with the --human --code compiler options. I don’t know if the SWI-Prolog implementation has a similar capability.

There’s also Attempto Project - “Controlled Natural Language”, which is (I think) implemented in Prolog.

But I think that “executable english” is more constrained than that, and is more like pre-defined phrases with slots in them; and it translates quite directly to a form of extended Datalog. Also, the British Nationality Act was an early attempt to model existing legislation with logic: http://www.doc.ic.ac.uk/~rak/papers/British%20Nationality%20Act.pdf
I think that “executable english” takes that concept and works it into an existing framework that provides explanations for what it computes. (There are some earlier papers on this, which you can find if you search for “Syllog” and Adrian Walker.)

The top behaviour is what I would expect. The bottom behaviour feels like a bug. Maybe @jan can help?


Update:
If you replace `:- abducible p.` with `p :- not -p. -p :- not p.`, it behaves as expected, so it does seem to be a problem with whether or not Prolog detects the presence of the -p predicate when it is generated by the abducible statement.


I don’t understand the difference between these two. Can you give me a more concrete example?

Sorry, I’m confused. I don’t see where your example is doing hypothetical reasoning. It’s not generating cases, the cases have been laid out by the user. One is p and the other is ~p. It is solving for two cases. That they are “hypothetical” is just in the mind of the user. I think that might be what you mean when you say

I also still don’t understand what the difference is between “don’t know, don’t care” hypothetical reasoning, and “don’t know, guess” hypothetical reasoning. I was hoping you could give me an example of the difference between the two.

Thank you for that, I think I understand, now.

I highly doubt s(CASP) is capable of supporting a hypothetical assumption operator, because it is defined procedurally.

The way you do “don’t know, don’t care” hypothetical reasoning in s(CASP) is by running two queries. If ?- q. returns models, and ?- not q. doesn’t, then q is certain, regardless of the unspecified facts. You don’t know those facts, and it doesn’t matter.
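
Concretely, a sketch with one invented proposition q and one unknown input r:

    #abducible r.
    q :- r.
    q :- not r.

    % ?- q.      succeeds: there is a model under either assumption about r.
    % ?- not q.  has no models, so q holds no matter what r turns out to be.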

And the queries return a lot more useful information, and they remain the same regardless of the number of propositions. If you wanted to do the equivalent of #abducible a,b,c,d. using the “hypothetical assumption operator”, you would need to write an incredibly long query.

“Hypothetical assumption operator” is a very generous term for it. I would call it a “temporary fact operator”, or something along those lines. It requires the user to hypothesize the fact scenarios, it merely provides a way to consider each fact scenario separately in one query. That’s useful, but by no means is it automating hypothetical reasoning.

I’m afraid I am not well grounded in the logical theoretical underpinnings of the different approaches. But perhaps this is helpful: When discussing uncertainty in s(CASP) with the authors of the Ciao version, they advised me that in answer set programming there are 5 levels of certainty about a proposition in a model, not 2 or 3.

  1. Known True: p.
  2. Possibly True: not -p.
  3. Unknown: not p, not -p.
  4. Possibly False: not p.
  5. Known False: -p.

I have seen #abducible p. in s(CASP) described as imposing the law of the excluded middle on the proposition p. It is expanded by s(CASP) to

p :- not -p.
-p :- not p.

The first line excludes “possibly true,” and “unknown.” The second line excludes “possibly false,” and “unknown.” So it reduces the number of possible states for p to 2.
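
A minimal sketch of what that reduction looks like in practice (nothing here beyond the expansion above):

    #abducible p.

    % ?- p.      succeeds; the model commits to p.
    % ?- -p.     succeeds; the model commits to -p.
    % ?- p, -p.  fails; no consistent model contains both.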

If you are interested, I think there are videos and papers from the First Workshop on Goal Directed Execution of Answer Set Programs, which was held alongside the last International Conference on Logic Programming. You could also reach out to the authors of the Ciao version, who are well versed in the logical implications.

I’m sorry I don’t have a link handy for the proceedings, but this paper was one of the ones presented: A Short Tutorial on s(CASP), a Goal-directed Execution of Constraint Answer Set Programs (ceur-ws.org).

This, precisely. s(CASP) is for processing answer set programs in a goal-directed way. An s(CASP) response is a list of tuples of a dictionary (bindings), a set (model), and a tree (justification). Given the existence of s(CASP) queries and responses, ephemerality (“don’t know, don’t care” hypothetical reasoning) is reduced to determining whether one list has elements and another doesn’t. Why expect s(CASP) to do that? Any tool can do that. Prolog certainly can, given that it is already running s(CASP) queries and post-processing the results to display them in SWISH.

Creating a Prolog predicate that succeeds when an s(CASP) query returns any models, and its negation returns none should be trivial.

It seems trivial regardless.

Presume there is a Prolog predicate scasp_models(Q,N) that expects to be given a query and binds N to the number of models returned by an s(CASP) program defined in a different file. Presume also that there is a predicate scasp_negation(Q,NQ) that takes an s(CASP) query and binds NQ to its negation.

% ephemeral(Q) succeeds when Q holds regardless of the unknown facts:
% the query itself has at least one model and its negation has none.
ephemeral(Q) :-
  scasp_models(Q, N),
  scasp_negation(Q, NQ),
  scasp_models(NQ, M),
  N > 0,       % Q is supported by at least one model
  M =:= 0.     % and "not Q" has no models at all

Now your query is ?- ephemeral('q').

How is that not enough?

That’s how you do it with hypothetical entailment. I’m not saying that my ephemeral/1 predicate above is how you would implement hypothetical entailment. I’m saying that it shows how you could proceed without hypothetical entailment, and get the answer to your query, which is “does q always hold regardless of the truth value of the unknown propositions.”

I don’t see why it needs to be a single system, so that’s not my goal. That said, if you succeed in adding that feature to a system, let me know. I would be curious to see what else becomes possible when the program is capable of reflecting on its own outputs.
