On sorting non-ground terms and the standard order of terms

But for both tasks you need to be able to find subterms inside a term and replace them.

Oh, I see- so that’s what unification is, or can be used for? And that’s Robinson’s unification algorithm, like in Prolog? I understand there are other ways to do unification, though I’m not familiar with them.

So, how does one use Knuth-Bendix, in practice? What’s it for?

Apologies- I didn’t mean to imply you’re a newbie. I hope my comment doesn’t come across that way!

Yes, it’s very useful to talk with people who have slightly different backgrounds- but not too different because then it’s hard to find common ground.

Personally, I started with Prolog during my degree in CS, a few years ago. I approached it completely empirically and without too much care about theory etc. As I kept reading Prolog textbooks I inevitably picked up bits and pieces here and there, and of course I had to learn a lot more for my PhD, but my theoretical background is still full of holes like Swiss cheese.

One problem is that there is so much background. ILP starts around the '90s, with only a few scattered pieces of work before that (including a 1981 paper on the model inference problem by none other than Ehud Shapiro). Logic programming in general goes way back, to the '70s. Existing Prolog textbooks do a good job of describing the language, but other than Lloyd’s Foundations of Logic Programming and the introductory sections of “Foundations of Inductive Logic Programming” (Nienhuys-Cheng and de Wolf, 1997) I haven’t read anything else on the theory of logic programming itself.

Then there are all the new developments in logic programming, like ASP, say, or even constraint programming (newer than Prolog itself, I mean). Where to even begin?


Not at all.

That is one of the things I like about this forum: everyone is professional and respectful in how they communicate.


Happy New Year!

This surprises me, because terms like ability([destroy,target,'Serpent'|_A],_A) and ability([destroy,target,'Serpent'],[]) are not syntactic variants, so you do need some kind of unification or subsumption check in this case, like

?- subsumes_term(ability([destroy,target,'Serpent'|_A],_A), ability([destroy,target,'Serpent'],[])).
true.

Are you sure the difference lists shouldn’t be grounded before this stage is reached?

In any case, here’s an idea for intersection modulo syntactic variance:

:- use_module(library(apply)).
:- use_module(library(ordsets)).
:- use_module(library(varnumbers)).

variant_intersection(X, Y, Z) :-
    copy_term(X, Xc),                % work on copies; X and Y keep their variables
    copy_term(Y, Yc),
    maplist(numbervars, Xc),         % bind variables to '$VAR'(N): variants become identical
    maplist(numbervars, Yc),
    sort(Xc, Xs),                    % standard order of terms; duplicates removed
    sort(Yc, Ys),
    ord_intersection(Xs, Ys, Zg),
    maplist(varnumbers, Zg, Z).      % undo numbervars: fresh variables in the result

Examples:

?- variant_intersection([f(X)], [f(Y)], Z).
Z = [f(_2756)].

?- ord_intersection([f(X)], [f(Y)], Z).
Z = [].

Hi Michael,

Happy New Year to you too!

Oh dear. You’re absolutely right: I’ve been throwing around the term “alphabetic variant” with abandon, but I’ve been using it wrong. I somehow convinced myself that two expressions being alphabetic variants means that variables in one expression can be substituted for any kind of term in the other expression (so, unification), when in fact in variants only variables are renamed. I just checked my go-to reference, Foundations of ILP, and that’s indeed the definition.

Many apologies, I keep doing that sort of thing and it spreads confusion and
makes people mad at me.

To clarify, it’s like you say: I want a unification or subsumption test.
Unification, rather, because what I really want to test is some notion of
equality between an example atom and an atom in the success set of a learned
program (“atom” in the general FOL sense, not in the Prolog sense).

So I can’t use your intersection modulo syntactic variance. Thanks for the
suggestion and again, sorry for the confusion- I hope you’re not mad at me.

I would like to ground the success-set list. But I can’t think of a good way
to do this for lists of arbitrary (Prolog) terms that would still allow the
atoms in the success set to unify with atoms in the set of examples.

To give an example, say that, below, Rs is the success set of the learned hypothesis, sorted somehow, and Pos is the set of examples, assumed ground and sorted in the ordinary way. For the purpose of measuring the accuracy of the learned hypothesis, I would want the size of the intersection of Rs and Pos to be 1. Grounding Rs by skolemisation (with numbervars/1) would make Pos and Rs “unequal” under unification, so the size of their intersection would be 0 and the learned hypothesis would look less accurate than it really is (because a hypothesis that generates Rs entails Pos, so it should be found accurate at least for that one example).

Rs = [ability([destroy,target,'Serpent'|_A],_A)]
Pos = [ability([destroy,target,'Serpent'],[])].
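
For instance, a subsumption-based intersection along these lines might do (an untested sketch; covered/2 and subsumption_intersection/3 are made-up names), using subsumes_term/2 as in your example above:

:- use_module(library(apply)).   % include/3
:- use_module(library(lists)).   % member/2

% True when the (possibly non-ground) atom R subsumes some ground
% example in Pos.
covered(Pos, R) :-
    member(P, Pos),
    subsumes_term(R, P).

% Keep the atoms of Rs that cover at least one example in Pos.
subsumption_intersection(Rs, Pos, Is) :-
    include(covered(Pos), Rs, Is).

With Rs and Pos as above, that would give Is = Rs, an intersection of size 1.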

Ultimately, I could “just” assert the learned program, call each positive or negative example as a goal, and count how many calls succeed. This was my preferred method for evaluating learned programs until recently. But I think there’s something attractive in defining accuracy (and other measures of learning quality) in terms of set operations. And anyway, there must be a good way to do this.
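
For reference, that evaluation method is simple enough; a sketch, assuming the learned program has been asserted (count_covered/2 is a made-up name):

% Count how many of the examples in Es succeed as goals against the
% currently asserted program.
count_covered(Es, N) :-
    aggregate_all(count, (member(E, Es), call(E)), N).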

Edit: reading through my first posts again, I see that I used “alphabetic variant” correctly (or at least appeared to) in my first few posts in this thread, since I only gave examples of lists with actual variants. So up to that point, although I still had the wrong meaning of “variant” in my head, the discussion should make sense. I really started making a mess of things when I started discussing set operations. Again, apologies for that; I hope it doesn’t utterly confuse anyone who finds this post later.

I haven’t been following this thread in detail, so the following remark might not be relevant.

I’ve found building a different kind of unification on top of Prolog’s unification to be very tricky, and it’s often better to build it separately, as a kind of interpreter. You can use “unification hooks” on attributed variables, as mentioned by @j4n_bur53, but there are some subtle limitations in the implementation that haven’t been fully resolved in SWI-Prolog; ECLiPSe is apparently better for this, but it can still be very tricky (e.g., ==/2 for attributed variables).

If instead you transform the terms so that everything is wrapped with its “type”, then life can be easier, and you can write your own unification with that. For example, change foo(1,a,X) to term(foo,[number(1),atom(a),var('X')]) (or some variant on numbervars for X), and then you can have rules such as the following, where you know that your “wrapped terms” don’t have any Prolog variables in them:

% Unification over type-wrapped terms, threading an explicit list
% of variable bindings.
my_unify(number(X), number(X), Bindings, Bindings).
my_unify(atom(X), atom(X), Bindings, Bindings).
my_unify(var(Var), atom(X), Bindings0, Bindings) :-
    add_binding(Bindings0, Var, X, Bindings).
my_unify(term(Name, Args1), term(Name, Args2), Bindings0, Bindings) :-
    my_unify_list(Args1, Args2, Bindings0, Bindings).
... etc. ...
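
For concreteness, the wrapping step could go something like this (a sketch; term_wrapped/2 and wrap/2 are just illustrative names), using numbervars/3 so that variables come out as var(N) markers:

:- use_module(library(apply)).   % maplist/3

% Replace variables by '$VAR'(N), then tag every subterm with its type.
term_wrapped(Term, Wrapped) :-
    copy_term(Term, T),
    numbervars(T, 0, _),
    wrap(T, Wrapped).

wrap('$VAR'(N), var(N)) :- !.
wrap(X, number(X)) :- number(X), !.
wrap(X, atom(X)) :- atom(X), !.
wrap(T, term(Name, WrappedArgs)) :-
    compound(T),
    T =.. [Name|Args],
    maplist(wrap, Args, WrappedArgs).

?- term_wrapped(foo(1,a,X), W).
W = term(foo, [number(1), atom(a), var(0)]).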

Hi Peter, Happy New Year.

It looks like I could do what I wanted with Prolog’s unification. But that typed unification seems interesting. I guess it would suffice to use var/1, number/1, atom/1 etc. to find the types of terms. Could work.

For the record, I think I’ve got my code to a point where it’s behaving as I expect, even if I am not great at explaining what that means. I basically used Boris’ suggestion of unifiable_compare/3 and, er, copied all the ord_intersection/3 and ord_subtract/3 code from library(ordsets), replacing the calls to compare/3 with calls to unifiable_compare/3. I’ll have to figure out what that means in terms of the license I need to distribute with my code, I think.
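
Roughly, the comparison treats unifiable terms as equal and otherwise falls back on the standard order, something like this (a sketch; Boris’ actual definition may differ):

% Terms that unify compare as (=); otherwise use the standard order.
unifiable_compare(Order, X, Y) :-
    (   unifiable(X, Y, _)
    ->  Order = (=)
    ;   compare(Order, X, Y)
    ).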

Cheers for the help!