This is a logic question, perhaps, as opposed to a logic programming question.
I’m probably misunderstanding something, or forgetting something I already learned, so your corrections are appreciated.
Satisfiability I understand to mean that it is not possible to derive a contradiction from a program.
Models have to do with the facts under which a program is satisfiable, but I don’t want to talk about models, so I’m going to invent a different term, “entailed program.”
An entailed program is a set of facts AND rules that satisfies a program, optionally including facts AND rules that can be derived from the program. So each program, if it is satisfiable, would be an entailed program of itself.
If you had the program:
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
Then an entailed program of that program might be:
p(X) :- q(X), r(X).
q(X) :- t(X).
t(1).
q(1).
p(1) :- r(1).
That last line is what I’m interested in. It is a new rule, more specific than the existing rule but reaching the same conclusion, and in a less simple example it may be considerably less computationally expensive than the original rule. And it is a clear expression of what the user may be trying to discover, which might be “under what circumstances does p(1) hold?”
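To make that concrete, here is a minimal Prolog sketch of how such a residual rule could be derived mechanically by unfolding. The names rule/2, open/1, and residual/2 are my own inventions for illustration: the object program is stored as rule(Head, BodyList) facts, and atoms marked “open” (here r/1, which has no clauses) are collected into the residue instead of being solved.

:- dynamic rule/2, open/1.

% Object program from above, as rule(Head, BodyList) facts.
rule(p(X), [q(X), r(X)]).
rule(q(X), [t(X)]).
rule(t(1), []).

open(r(_)).    % r/1 has no clauses; keep such atoms symbolic

% residual(+Goals, -Residue): unfold Goals against rule/2,
% collecting open atoms as the body of a residual rule.
residual([], []).
residual([G|Gs], [G|Rest]) :-
    open(G), !,
    residual(Gs, Rest).
residual([G|Gs], Residue) :-
    rule(G, Body),
    append(Body, Gs, Goals),
    residual(Goals, Residue).

Querying ?- residual([p(1)], R). gives R = [r(1)], which is exactly the residual rule p(1) :- r(1). Note that this is driven backward from the query, so nothing irrelevant to p(1) is ever unfolded.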
The closest thing I have seen to this is abducibility in s(CASP), where you can add statements to the program meaning “either r(1) holds or it doesn’t” and have it find models. But there you need to keep track of which statements you made abducible and why, and then derive from the models you get back that p(1) holds if r(1) does.
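For reference, my understanding is that the standard way to make r(1) abducible in this setting is an even loop over default negation, roughly as follows (neg_r is a name I made up here; I believe s(CASP)’s abducible declarations expand to something similar):

% r(1) may or may not hold; the solver explores both choices.
r(1) :- not neg_r(1).
neg_r(1) :- not r(1).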
But if we got entailed programs instead of models, that conditional conclusion would just be stated explicitly right there in the program.
Now, even a few rules could result in an enormous number of entailed programs. But it seems to me that if you told the system that what you want to know is the rules about p(1), backward chaining would allow you to avoid generating rules that don’t matter.
It would also mean that your inputs and your outputs are both programs, so you could add another fact, run the entailment again, and use the resulting programs to determine efficiently, for example, which inputs remain relevant for deriving a particular answer.
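As a hypothetical continuation of the residual/2 sketch above: if r(1) later becomes a fact, promoting it from “open” to a rule/2 fact and re-running the same query yields an empty residue, meaning p(1) now holds unconditionally.

?- retract(open(r(_))), assertz(rule(r(1), [])).
true.

?- residual([p(1)], R).
R = [].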
So my question is whether this is a thing, and if not, why not.
Thanks in advance.