# Why do models not include derived rules?

This is a logic question, perhaps, as opposed to a logic programming question.

I’m probably misunderstanding something, or forgetting something I already learned, so your corrections are appreciated.

Satisfiability I understand to mean that it is not possible to derive a contradiction from a program.

Models have to do with the facts under which a program is satisfiable, but I don’t want to talk about models, so I’m going to invent a different term, “entailed program.”

An entailed program is a set of facts AND rules that satisfies a program, optionally including further facts AND rules that can be derived from the program. So each program, if it is satisfiable, would be an entailed program of itself.

```
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
```

Then an entailed program of that program might be:

```
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
q(1).
p(1) :- r(1).
```

That last line is what I’m interested in. It is a new rule, more specific than the existing rule but reaching the same conclusions, and in a less simple example it may be considerably less computationally expensive than the original rule. And it is a clear expression of what the user may be trying to discover, which might be “under what circumstances does p(1) hold?”

The closest thing I have seen to this is abducibility in s(CASP), where you can add statements to the program meaning “either r(1) holds or it doesn’t” and have it find models. But there, you need to keep track of which statements you made abducible and why you made them abducible, and then derive from the models you get back that p(1) holds if r(1) does.
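For concreteness, here is roughly what that abducible version looks like (a sketch only; the `#abducible` directive is the standalone s(CASP) syntax, and the exact form may differ in other embeddings):

```prolog
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
#abducible r(1).
```

Querying `?- p(1).` should then return a model in which `r(1)` is assumed, and it is left to you to read “p(1) if r(1)” back out of that model.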

But if we got entailed programs instead of models, that conditional conclusion would just be stated explicitly right there in the program.

Now, even a very few rules could result in an enormous number of entailed programs. But it seems to me that if you told the system that what you want to know are the rules about p(1), backward chaining would allow it to avoid generating rules that don’t matter.

It would also have the effect that your inputs and your outputs would both be programs, so you could add another fact, run the entailment again, and use the resulting programs to efficiently determine, for example, what inputs remain relevant for deriving a particular answer.

So my question is whether or not this is a thing, and if not why not.


It’s normally: is there a combination of possibilities which satisfies all the constraints presented?

Is this question about s(CASP), or general Prolog?

Hi Jason,

I’m glad to see that you’re still in the fight! I hope we can see you at the GDE’24 workshop in October (GDE@ICLP-2024).
I have many questions about what you raise (I’m not sure I understand the goal), but I’ll try to offer some related ideas; I hope some of them are relevant and help answer your question.
Regarding @brebs’ question, it is important to differentiate between ASP semantics and Prolog:

• Prolog computes the least fixed point (a single model) and does not support unstratified negation.
• ASP computes multiple models (the presence of unstratified negation allows one, among other things, to model abducibility).

The concept of an entailed program is interesting (the work we have presented on forgetting, https://hdl.handle.net/11705/PROLE/2024/13, may be related). But while under Prolog semantics it may not present problems, in ASP, where we have negation, everything gets complicated and it is not so easy to remove and/or add rules or facts.

Another related aspect could be partial evaluation… (Partial evaluation in logic programming - ScienceDirect)

I’m trying to understand your entailed program vs. model.

When you say an entailed program generates new rules, do you mean that the instantiation of the provided arguments in the original program is a different structure than these new rules?

It seems like the entailed program is the output of `trace`, entered as `?- trace.`

This illustrates the reasoning and the instantiation of the variables you provide in:

```
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
q(1).
p(1) :- r(1).
```

Is `trace` an entailed program, or different somehow?

It is a thing, but if you’re deriving a new rule rather than a fact (a positive atom) then you’re doing induction, as in Inductive Logic Programming (ILP). Ordinary Logic Programming is deductive, and I guess also abductive, in the sense that you can derive new goals (negative atoms) during an SLD-Refutation proof, in Prolog, or however it is done in ASP (I’m not sure).

Here’s how I understand deduction, abduction, and induction in the context of logic programming:

• Deduction means deriving new facts from a set of facts and rules where the new facts are instances of the literals in the heads of rules.
• Abduction means deriving new facts from a set of facts and rules where the new facts are instances of literals in the bodies of rules.
• Induction means deriving new rules from a set of facts and rules.

So deduction is what you get when you compute the Least Herbrand Model (as in Prolog) or the Stable Model (as in ASP) of a logic program (a set of facts and rules). Abduction is what you get in Prolog when you derive intermediary goals during an SLD-Refutation proof; in your example, if I squint a bit, it looks like you’d derive `¬q(1)` and `¬r(1)` during a proof if your goal was `¬p(1)` or `¬p(X)`. Induction is what you get if you used some ILP system to learn `p(1) :- r(1).` from your original program.
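To make the deduction case concrete, here is a small bottom-up sketch in plain Prolog (the `rule/2`, `fact/1`, and `deduce/0` names are invented for this example): it computes the least model of your program by firing rules until nothing new is derivable.

```prolog
:- dynamic fact/1.

% The example program, reified as rule(Head, BodyLiterals) and fact/1.
rule(p(X), [q(X), r(X)]).
rule(q(X), [t(X)]).
fact(t(1)).

holds(G) :- fact(G).

% One immediate-consequence step: assert every head whose body
% literals all hold and which is not yet known.
step :- rule(H, B), maplist(holds, B), \+ fact(H), assertz(fact(H)), fail.
step.

% Iterate the step to a fixed point.
deduce :-
    findall(F, fact(F), Before),
    step,
    findall(F, fact(F), After),
    ( Before == After -> true ; deduce ).
```

Running `?- deduce.` adds `q(1)` but never `p(1)`, because `r(1)` is not derivable; the conditional `p(1) :- r(1)` is exactly what deduction alone cannot give you.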

As @brebs and @Xuaco suggest you have to be clear about whether you’re trying to prove a goal, i.e. a set of negative literals, by SLD-Refutation as in Prolog, or trying to derive a stable model, as in ASP. In your example above, you don’t seem to have a goal so that’s probably ASP.

If you’re interested in learning ASP programs, have a look at ILASP (Inductive Learning of Answer Set Programs):

https://www.ilasp.com/

And if you’re interested in bog standard Prolog-based ILP then have a look at my stuff:

(well I mean I have to plug my work right?)

I pointed Gopal Gupta at this question. Below is his response.

By Gopal Gupta:

What the person who wrote the question is asking can be achieved by partial evaluation. Basically, in partial evaluation one algebraically simplifies the program until it cannot be further simplified.

So given the program,

``````p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
``````

We could simplify the clause for p(X) by simplifying its body. One can see how one will get

p(1) :- r(1).

simply by rewriting q(X) using standard Prolog operational semantics. The fact that the definition of r(X) is not given will cause the partial evaluation process to stop, resulting in the residual rule for p/1: p(1) :- r(1).
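As a toy illustration of that unfolding step, here is a minimal partial-evaluation sketch in plain Prolog (the `clause_/2` and `residual/2` names are invented for this example; a real system such as ECCE also handles termination, loops, and generalisation, all of which this ignores):

```prolog
% The object program, reified so that undefined predicates
% (here r/1) are simply those with no clause_/2 entry.
clause_(p(X), (q(X), r(X))).
clause_(q(X), t(X)).
clause_(t(1), true).

defined(G) :- clause_(G, _).

% residual(+Goal, -Residue): unfold Goal against the program,
% leaving calls to undefined predicates as the residue.
residual(true, true) :- !.
residual((A, B), R) :-
    !, residual(A, RA), residual(B, RB), conj(RA, RB, R).
residual(G, R) :-
    defined(G), !, clause_(G, Body), residual(Body, R).
residual(G, G).                    % undefined: keep as a condition

% Conjoin residues, dropping trivially true conjuncts.
conj(true, R, R) :- !.
conj(R, true, R) :- !.
conj(A, B, (A, B)).
```

Then `?- residual(p(X), R).` yields `X = 1, R = r(1)`, i.e. the residual rule `p(1) :- r(1).` that Gopal describes.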

One could perform this partial evaluation using s(CASP) operational semantics, if one wished (slightly more complicated due to presence of constraints and even loops).

There are partial evaluators for Prolog. The one from Michael Leuschel called ECCE is probably the most recent. Mixtus, based on SICStus, was also a good one, but may not be available anymore. I have used Mixtus for showing how an interpreter for an imperative language specialized wrt a program simplifies to something that is similar to the assembly code for that program (demonstration of the First Futamura projection). This was 25+ years ago. If interested, the paper is here: https://utdallas.edu/~gupta/lnai.pdf (see section 3).


Thanks, everyone. I’m not familiar with partial evaluation or inductive logic programming, so I have a lot to look into, there. Maybe when I understand better I can ask better questions.

@Xuaco My new day job has me working on other things recently, but I’m still in the fight on a hobby basis. It’s just that a lot of my work on Blawx has been on UI and integrations, not the reasoning. And with so little time to spend on it, it will be that way for a while.

The Government of Canada recently published a case report on the OECD OPSI blog about how they used Blawx (and thereby s(CASP)) for a legislative drafting project, so make sure to add that to your list of real-world deployments.

I don’t know if I will be joining for the workshop, yet, but thanks for the reminder!
