Building a proof tree / collecting clauses in a refutation sequence

I would like to collect all the clauses included in a successful refutation sequence of a goal. In more imperative terms, I’d like to collect a list of the clauses that were “called” during execution of a query. This is also known as building a proof tree.

I know I can do this with a meta-interpreter; in fact, I think I’ve done it a few times already. However, I would like to take advantage of SWI-Prolog’s internal machinery, most notably tabling, but also anything else that would otherwise have to be coded from scratch in a meta-interpreter.

Is there a way to do what I want in Swi without a meta-interpreter?

I was recently reading this older thread:

https://swi-prolog.discourse.group/t/how-to-set-a-global-value-during-execution-of-a-predicate

where Jan Wielemaker suggests using prolog_current_frame/1 and prolog_frame_attribute/3 with a bit of a hack in front. I thought that I could somehow use this to keep track of the clause indices or clause references of the clauses in a refutation sequence, but I couldn’t figure out how to make it work.
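For what it’s worth, here is a rough sketch of what I had in mind with the frame attributes (all names below are my own; it assumes the `clause` and `parent` frame attributes behave as documented, and note that frames removed by last-call optimization will be missing, so clauses can be skipped):

```prolog
% Rough sketch: walk up the frame stack from the current frame and
% collect the clause references of the clauses being executed.
% Frames removed by last-call optimization will be missing.
ancestor_clauses(Refs) :-
    prolog_current_frame(Frame),
    frame_clauses(Frame, Refs).

frame_clauses(Frame, [Ref|Refs]) :-
    prolog_frame_attribute(Frame, clause, Ref),   % fails for foreign frames
    !,
    parent_clauses(Frame, Refs).
frame_clauses(Frame, Refs) :-
    parent_clauses(Frame, Refs).

parent_clauses(Frame, Refs) :-
    (   prolog_frame_attribute(Frame, parent, Parent)
    ->  frame_clauses(Parent, Refs)
    ;   Refs = []                                 % reached the top frame
    ).
```

The collected clause references could then be mapped to clause indices, but I couldn’t see how to trigger this collection at the right points of a refutation without hooking the tracer.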

Is there something else that is possible to do?

EDIT: After the conversation below, I realised I would have saved us all a lot of time if I had given an example. I redress this below.

Consider the following program:

odd(A):- s(A,B), even(B). % Clause 1

s(s(X),X).

even(0).
even(s(X)):-
       odd(X).

I would like to write a predicate rs/2 to collect all the successful refutation sequences that included clauses of odd/1 (and only keep the clauses of odd/1 in those sequences). For example, suppose I called rs/2 as shown below:

?- G = odd(s(s(s(0)))), rs(G, Ss).

Then rs/2 should respond as follows:

Ss = [1] ;
false.

Here, “1” is the clause index of Clause 1 in the odd/1 definition. In other words, it’s the sequence of clauses “visited” during a successful refutation, by resolution, of the goal G = odd(s(s(s(0)))).
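To pin down the specification, here is a vanilla meta-interpreter sketch of rs/2 (exactly the kind of thing I’m trying to avoid, but it makes the intent concrete). It uses SWI’s clause/3 and nth_clause/3 to recover clause indices:

```prolog
% The example program from the post (declared dynamic so that
% clause/3 can inspect it portably):
:- dynamic odd/1, even/1, s/2.

odd(A) :- s(A, B), even(B).     % Clause 1

s(s(X), X).

even(0).
even(s(X)) :- odd(X).

% A vanilla meta-interpreter sketch: proves Goal with clause/3 and
% records the index of each odd/1 clause used, via nth_clause/3.
% Handles only conjunctions and user-defined predicates.
rs(Goal, Indices) :-
    prove(Goal, Indices, []).

prove(true, Is, Is) :- !.
prove((A, B), Is0, Is) :- !,
    prove(A, Is0, Is1),
    prove(B, Is1, Is).
prove(Goal, Is0, Is) :-
    clause(Goal, Body, Ref),
    (   Goal = odd(_)
    ->  nth_clause(_, I, Ref),
        Is0 = [I|Is1]
    ;   Is1 = Is0
    ),
    prove(Body, Is1, Is).
```

Note that this sketch records one index per use of a clause, so a clause used twice in a refutation appears twice (e.g. odd(s(s(s(0)))) gives [1,1]).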


I once saw this code posted here in the forum:

% Hook called by the tracer at each port; it records the goal and
% frame level instead of prompting the user, and returns `continue`
% so execution keeps going.
prolog_trace_interception(Port, Frame, _PC, continue) :-
    prolog_frame_attribute(Frame, goal, Goal),
    prolog_frame_attribute(Frame, level, Level),
    recordz(trace, trace(Port, Level, Goal)).

:- meta_predicate record_trace(0).
record_trace(Goal) :-
    % setup_call_cleanup/3 makes sure the tracer is switched off
    % even if Goal fails or throws.
    setup_call_cleanup(trace, Goal, notrace).

pop_trace_recording(Value) :-
    recorded(trace, Value, Reference),
    erase(Reference).

I don’t know if this is what you seek but better to have seen it and judge for yourself than to not have seen it. :slightly_smiling_face:

Logic Programming for Theory Representation and Scientific Inference in Psychology


Thanks for the mention @EricGT! @stassa.p, if you are interested in the meta-interpreter in this project it is here (on line 78-84). Some output examples can be found here, under section 6.3.


Thanks for posting that. If I understand correctly it turns on the tracer so it would basically need user interaction, no? That won’t work for where I want to use it.

Anyway I’ll give it a try, but I was hoping for something that doesn’t need to turn on tracing.

Thanks @JCR and @EricGT. I noticed the paper in one of the news links on the Swi-Prolog documentation, but I haven’t had time to read it yet. I should go find some :slight_smile:

@JCR That’s nice, clean, readable code, a joy to read! Thanks for linking to it and for linking to the examples. The meta-interpreter that prints the trace of the program to the console, in section 6.3, certainly fits the bill - but… it’s a meta-interpreter.

I mean, it’s funny coming from me, since all my PhD work is on an approach to ILP called “Meta-Interpretive Learning”, but whenever I can hand off computation to a Prolog engine, rather than a meta-interpreter, I prefer to do that - for efficiency first of all, and also because many of the programs I want to process are left-recursive. There are basically two principled ways to deal with that: bottom-up evaluation (with a TP operator), which only works with datalog, and SLG resolution. Then there’s a myriad of hacks that can be implemented, like limiting resolution depth, etc.

An efficient bottom-up interpreter is something that can take a great deal of work to do right. And I don’t know that I could implement SLG resolution in a meta-interpreter without spending a couple of months getting my head around it and doing nothing else. I’m slow!

To be clear, I have a bit of code already that transforms each clause in a program to add a literal recording a clause index whenever that literal is resolved (i.e. when the rest of the clause resolves successfully). But it uses the dynamic database, which is evil, so I was looking for a cleaner and more efficient alternative.
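The transformation I mean is something like this hypothetical sketch (made-up names, not my actual code): clause number I of the program gets an extra final goal that asserts its index when the rest of the body has succeeded.

```prolog
% Hypothetical sketch of the transformation: clause I of a program
% gets an extra last goal that records I in the dynamic database
% when the body has succeeded. Facts become rules with a single
% recording goal.
:- dynamic visited/1.

indexed_clause(I, (Head :- Body), (Head :- Body, assertz(visited(I)))) :- !.
indexed_clause(I, Head, (Head :- assertz(visited(I)))).
```

After running a query against the transformed program, the visited/1 facts give the clauses used - but cleaning them up between refutations is exactly the kind of dynamic-database bookkeeping I’d like to avoid.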

I’ll look at your meta-interpreters carefully though, I really like the look of them. I’m also particularly interested in having a go with showSubsumes/1 - that one’s not a meta-interpreter. How come you need it?

Anyway, nice work!

Edit: if you’re interested in learning theories from examples with ILP, I could point to some newer systems than Aleph that help address the incompleteness of inverse entailment. I’ll need to come back to you with links but some alternative systems to consider are: Metagol, Popper, ILASP (learning in ASP rather than Prolog) and my own Louise.


Happy to hear that you might find that code useful. A lot of credit goes to people on this forum, Stackoverflow, Sterling & Shapiro’s The Art of Prolog, and Markus Triska’s The Power of Prolog. For me, this help has been essential for understanding meta-interpreters and writing them.

The idea is to have a predicate that evaluates whether a theory is more general than another, in line with Karl Popper’s thinking around falsifiability. It will be interesting to see if this code will be useful or not in the future.

Thanks for the suggestions about ILP systems! I am really diving into this area at the moment. I read parts of your paper on Louise. I think this stuff is fascinating, but I do not understand all of it yet. In my next research project I want to try out ILP on psychological data. Being able to use FOL for theory representation opens up a lot of possibilities in psychology, I think. Currently most people use mathematical models. The possibility to mix qualitative and quantitative information, nest functions in predicates and have predicates with several arguments is a game changer, I think.

I will also definitely look at Popper and ILASP. Thanks!

Have a nice evening!


Ah, I see. I had a more careful look at the subsumes/5 predicate again. I’m not 100% sure what the “Input” argument represents but if I understand correctly, what this is testing is entailment, rather than subsumption in the logic programming sense - so I think you’re using “subsumes” more loosely, or according to a different formalism (er, I have to show myself clueless and ask if it stands for a term defined in Popper’s work).

In any case it’s interesting to me that you have a predicate that basically tests generality between theories, because that sort of thing is central to ILP. In short, entailment is typically used in ILP to test whether a learned program correctly represents a set of examples but also to select between competing theories by selecting the one that is more general. This sounds very much like what your program is for.


@stassa.p Yes, I am using “subsumes” a bit loosely. subsumes/5 is nothing fancy and not as advanced as the stuff you do in Louise or another ILP system. It just uses subset/2 to find out if the goals entailed by the subtheory are also entailed by the supertheory; these sets of goals are two lists (one for the subtheory and another for the supertheory) of (supposedly) ground terms. So subsumes/5 cannot really come up with generalisations (at least I do not think so).

The INPUT argument is used to tell the meta-interpreters that certain goals hold, in addition to the facts in a program per se. The motivation for this mechanism is to be able to formulate a theory program free of information about specific examples that it might be applied to; specific examples are handed to the theory in the INPUT argument instead. So, with a simple example, if we had the theory

grandparent(X, Z):- parent(X, Y), parent(Y, Z).

we might want to hand it

INPUT = [parent(abe, homer), parent(homer, bart)].
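In vanilla meta-interpreter terms, the mechanism is roughly this (a simplified sketch with made-up names, not the actual project code): a goal succeeds if it is a member of the Input list, in addition to the usual resolution against program clauses.

```prolog
% The grandparent theory from the example; parent/2 facts are not
% in the program but are supplied through the Input argument.
:- dynamic grandparent/2, parent/2.

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% Sketch of the INPUT mechanism: a vanilla meta-interpreter where a
% goal may succeed by membership in the Input list, as well as by
% resolution against clauses in the database.
prove_with_input(true, _Input) :- !.
prove_with_input((A, B), Input) :- !,
    prove_with_input(A, Input),
    prove_with_input(B, Input).
prove_with_input(Goal, Input) :-
    member(Goal, Input).
prove_with_input(Goal, Input) :-
    clause(Goal, Body),
    prove_with_input(Body, Input).
```

So prove_with_input(grandparent(abe, bart), [parent(abe, homer), parent(homer, bart)]) succeeds, even though there are no parent/2 facts in the program itself.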

It would be interesting to discuss these issues more with you. I’ll send you a PM :slight_smile:

Cheers/JCR

I just replied to your pm.

For the sake of others watching the discussion, I think the INPUT argument is what we’d call a background theory, or background knowledge, in ILP - so you are comparing two logic programs for generality with respect to a background theory. Cool stuff.


Thanks for this. I’m sorry to sound thick, but how would you use your sys_rule/3 to capture the mgu sequence? From what I can tell, Swi’s rule/2 is an alternative to clause/2 for the new => head operator. The obvious way I can think of to use it is in a meta-interpreter, to retrieve the body goals of a clause from the program database - is that what you mean?

Thank you Jan, that looks interesting. I wonder, what happens if you bind the head or body of append to a suitable atom? For example

?- member(H, [append([],[],[]), append([a], [b], [a,b]) ]),  sys_rule(append/3, H, B), write((H:-B)), write('.'), nl, fail; true.

Would this output a refutation sequence of each H atom?

(Edit: I mean, will there be any evaluation performed before writing? My intuition is that the answer is no.)

(Edit2: OK, sorry, I had a quick look at the StackOverflow question - I understand what you mean, the calls to sys_rule/3 and callable_property/2 are part of a meta-interpreter. I was trying to avoid doing that :slight_smile: - the reason being primarily efficiency, and even more so tabling, if that is even possible.)

Thanks - I updated my comment above sorry for the confusion :slight_smile:

I could try with a meta-interpreter. It’s just that in the past I’ve found that in terms of efficiency, a hand-crafted meta-interpreter cannot beat the Prolog engine itself. And then again there is the problem of controlling infinite recursion specifically with left-recursive clauses.

Clause indexing is a consideration also of course.

I’ll have to think about your comment more carefully, I might be missing something important.

I see, thanks. I think I may be able to do something like that with Swi’s clause_property/2 - I don’t need to output variable names for what I’m trying to do, not even the clauses as such- all I need is the clause indices (I want to know what clauses in a program are found in a refutation sequence of an atom).
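For the record, Swi’s nth_clause/3 relates a clause reference to its 1-based index within its predicate, which sounds like exactly the piece I need. A minimal example (p/1 is just an illustrative predicate):

```prolog
% An illustrative predicate with two clauses:
:- dynamic p/1.

p(a).
p(b).

% clause/3 yields a clause reference; nth_clause/3 maps that
% reference back to the clause's 1-based index in its predicate.
clause_index(Head, Index) :-
    clause(Head, _Body, Ref),
    nth_clause(_, Index, Ref).
```

For example, clause_index(p(b), I) binds I to 2, because p(b) is the second clause of p/1.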

Again, thanks. I’ll have to think about this and hack at it a bit. I’ll reply here if I have something that seems to work.

…without using the dynamic database!

(Edit: on the face of it, I don’t like that it has to turn on the tracer. That’s a bit of a game-breaker. Is there no other way to access the state of the Prolog VM as it’s running? Stupid question?)

Yes, I mean the resolution refutation of a goal. A refutation sequence is the sequence of clauses in the refutation. But note that we’re refuting the goal with respect to a program, and I’d like to collect the clauses in the refutation sequence that are also in the program, so not their resolvents.

Let me try to formalise this.

Suppose we have a goal, G, and a set of definite clauses P. A refutation sequence of G with respect to P is a sequence of clauses C_0, C_1, …, C_n, such that C_0 = (not G), every C_i with 1 =< i < n is a clause in P, and C_n is the empty clause.

If I didn’t bungle that up, that should help clarify what I mean. To give an example, suppose we have:

P = { p(a) }
G = ~p(x).

A refutation sequence of ~p(x) should be: {p(a)}.

The Tau Prolog diagram perhaps shows such a refutation sequence, along with the program P. I gather that {} is the program?

Oh, I see, I missed the negation of p(c) and I misunderstood the diagram.

What I mean by “refutation” is also what you say - informally, we prove a query is true by refuting its negation. Negation as failure is a special case where we consider a negated query to be false (therefore, the negated atoms of the query to be true) if refutation of the negation fails. It always does my head in a bit :slight_smile:

Fortunately, I don’t need to consider NAF for my use-case. I only need to process definite programs.

Yes, as you say, the problem is non-deterministic. I want to collect all successful refutation sequences, i.e. the ones that end at the empty clause - and only those.

I’m not sure whether I might have a problem with infinitely many refutation attempts. I certainly had trouble with infinite refutations, meaning that a program would “go infinite”, typically because of left-recursion. Leaving the refutation terminology aside: infinite resolution trees are a problem, as usual.

I’ve been able to avoid this with Swi’s very welcome SLG-resolution implementation. This is an important reason why I prefer not to use a meta-interpreter. It’s too much trouble controlling infinite recursion with a meta-interpreter and it’s always hacky - for instance, some of my code imposes a limit to the number of successful resolutions, but of course this cuts off completeness and it requires the user to set a “magic” number as a parameter. I could try to implement SLG-resolution with a meta-interpreter but that’s too much work and it’s already been done, in Swi.
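To illustrate the point with a minimal example (standard SWI-Prolog tabling, not my actual code), a left-recursive definition like the following loops forever under plain SLD resolution, but terminates under SLG resolution:

```prolog
% Left-recursive transitive closure: loops under plain SLD
% resolution, terminates under SLG resolution thanks to tabling.
:- table path/2.

path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, Y) :- edge(X, Y).

edge(a, b).
edge(b, c).
```

With the table/1 directive, a query like path(a, Y) enumerates Y = b and Y = c and then stops; without it, the first clause recurses on itself without ever consuming input.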

I should say, my implementation so far works alright, it’s just that it uses the dynamic database, which I really hate (not for any ideological reason- only because I cause 80% of my most frustrating bugs when I use the dynamic database) and I would like to find an alternative.

I added an example to my original post. I hope this helps clarify my intent. I should have done that right from the start. I’m really sorry for the confusion!

(But I do blame the mind#%&k of resolution by refutation, especially when one considers NAF. It reminds me a little of the scene in The Princess Bride, with the vial of poison and the Sicilian and the turning table).

OK, sorry for the deleted post. My program needs a background theory and some examples (it can work if you put everything in the first argument but it makes more sense the way I do it below). Here’s how I’d have to set it up, with the background definitions from the example I added to my original post:

% _Ts is the input program.
% _BK is a background theory of odd/1 and even/1.
% _Pos is your query atom, as a positive example of _Ts
% _Neg is an (empty) set of negative examples
% _Ss is the set of target predicate symbols - we want to collect clauses of only those predicates. 
?- _Ts = [(nat(X1) :- even(X1)), (nat(X2) :- odd(X2))], 
_BK = [(odd(A):-s(A,B),even(B)), s(s(X3),X3), even(0), (even(s(X4)):-odd(X4))], 
_Pos = [nat(s(s(s(0))))], 
_Neg = [], 
_Ss = [nat/1], 
prove_examples(_Ts, _Pos, _Neg, _BK, _Ss, _Ts_), print_clauses(_Ts_).
nat(A):-odd(A).
nat(s(s(s(0)))).
true.

Equally, if you give _Pos = [nat(s(s(0))) ] you should get:

?- _Ts = [(nat(X1) :- even(X1)), (nat(X2) :- odd(X2))], 
_BK = [(odd(A):-s(A,B),even(B)), s(s(X3),X3), even(0), (even(s(X4)):-odd(X4))], 
_Pos = [nat(s(s(0)))], 
_Neg = [], _Ss = [nat/1], 
prove_examples(_Ts, _Pos, _Neg, _BK, _Ss, _Ts_), print_clauses(_Ts_).
nat(A):-even(A).
nat(s(s(0))).
true.

And with _Pos = [nat(s(s(0))), nat(s(s(s(0))))]:

?- _Ts = [(nat(X1) :- even(X1)), (nat(X2) :- odd(X2))], 
_BK = [(odd(A):-s(A,B),even(B)), s(s(X3),X3), even(0), (even(s(X4)):-odd(X4))], 
_Pos = [nat(s(s(0))), nat(s(s(s(0))))], 
_Neg = [], 
_Ss = [nat/1], 
prove_examples(_Ts, _Pos, _Neg, _BK, _Ss, _Ts_), print_clauses(_Ts_).
nat(A):-even(A).
nat(A):-odd(A).
nat(s(s(0))).
nat(s(s(s(0)))).
true.

Does that make sense?

That makes sense, keeping in mind I haven’t ever tried to implement a Prolog engine and I don’t know how exactly it’s supposed to work. Shameful, I know.

It’s not a meta-interpreter. It’s a horrible hack. I’ve uploaded a cut-down version for you, along with the auxiliaries it needs:

prove_examples.pl (12.8 KB)

Note that the output I showed you above was slightly off - I forgot that the input needs to be “encapsulated” (so p(a,b) becomes m(p,a,b)). This is done in Louise to allow processing of second-order clauses. Anyway, now that I’ve edited the program a bit to make it stand-alone, it no longer outputs the two input examples nat(s^3(0)) and nat(s^2(0)). That’s probably a bug, because these examples are included in the program and should be found by themselves in a refutation sequence, I think.

Note also that this version is not perfect; in particular, it is incomplete, and that’s because of the tabling. With tabling, the first successful refutation sequence found will be memoized and no others will ever be returned, if I understand tabling correctly. So this basically defeats my requirement for a non-meta-interpretive solution that takes advantage of SLG resolution :frowning:

Edit: turn on debug(remove_null) in Swi to get some logging if you test this.

You are right that a meta-interpreter is not hard to write. Certainly compared to all the code I’ve written for this. Maybe I should stick with a meta-interpreter after all.

Thanks, that’s a nice, clear insight.