Non-Monotonicity/Defeasibility Strategies and Other Stuff

[This is a very long comment and I missed the rest of the conversation while writing it. Apologies!]

Yes, that’s absolutely right as usual. The fault is mine: sloppy thinking and sloppy notation. ¬p was meant as an atomic Horn goal, not an atom (in the FOL, not the Prolog, sense) with classical negation, so it should have been denoted ←p. Thank you for pointing out my error.

With that correction in mind, then: if ←p is an atomic Horn goal, and p and ¬p are a complementary pair eliminated during resolution, then the truth of p is uncertain (here, ¬p is a negative literal in a Horn goal, so it is correct to denote it with ¬).

More precisely, in that case the truth of p is “uncertain” under a closed world assumption (CWA). NAF under a CWA, as in SLDNF-Resolution, is non-monotonic: if we add clauses to a normal program (a definite program extended with NAF in clause bodies), its success set can not only grow, but also shrink (and likewise if we remove clauses). A positive literal p that previously had an SLDNF-derivation (equivalently, a goal ←p that previously had an SLDNF-refutation) may then no longer have one, and vice versa.
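To make the non-monotonicity concrete, here is a tiny made-up example (SWI-Prolog syntax; the predicates are invented purely for illustration):

```prolog
% Adding a clause can make a previously succeeding goal fail under NAF.
:- dynamic penguin/1.   % declared dynamic so the program can start with no penguin/1 clauses

flies(X) :- bird(X), \+ penguin(X).
bird(tweety).
```

Initially the goal ←flies(tweety) has an SLDNF-refutation, i.e. ?- flies(tweety). succeeds. After adding the single clause penguin(tweety), for instance with assert(penguin(tweety)), the same goal fails: adding a clause has shrunk the success set.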

And that is where the uncertainty comes from: we may be able to establish a contradiction between p and ¬p now, but we have no idea whether that will still be the case in the future. Thus, a contradiction indicates not inconsistency, but uncertainty.

You have to excuse me if this is still not terminologically spick and span, but I’m still feeling my way around both the ideas, and the notation.

Yes! In the framework above, the atom p of uncertain truth is to be abduced.

I have to say that I started fumbling in the dark towards all this after a conversation with Antonis Kakas at the Hook meeting on Cognitive AI at the Royal Society in London, in September this year, so it’s not an accident that I considered the role of abduction in all this.

I have to say also that I can only “grok” abduction, as well as deduction and induction, in the context of NAF and SLD-resolution, because that’s all I know! In that context, given a definite clause C and a Horn goal ←G: “deduction” is the derivation of a new positive literal p in the head of C; “induction” is the derivation of a new definite clause D of the same predicate as a literal in ←G; and “abduction” is the derivation of a new positive literal p that is the complement of a literal in the body of C.

I bring good news: not only am I Greek, but also, in principle, all three modes of inference can be performed with Good Old-Fashioned SLD-resolution and, in practice, with a suitably modified Prolog meta-interpreter.

Deduction, of course, is done with the standard, “vanilla”, Prolog meta-interpreter (and its many variants).
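For reference, one common formulation of the vanilla meta-interpreter, in SWI-Prolog syntax:

```prolog
% The "vanilla" Prolog meta-interpreter: deduction by SLD-resolution,
% with clause/2 looking up the clauses of the object program.
prove(true) :- !.
prove((A, B)) :- !,
    prove(A),
    prove(B).
prove(A) :-
    clause(A, B),
    prove(B).
```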

Induction is possible with the inductive Prolog meta-interpreters used in Meta-Interpretive Learning (MIL), a form of Inductive Logic Programming (ILP); these meta-interpreters include second-order definite clauses (metarules) in resolution. You can find the most recent example of one such meta-interpreter in my MIL system, Louise:

The inductive meta-interpreter is prove/5, starting on line 465. You’ll recognise its structure as that of the “vanilla” Prolog meta-interpreter. prove/5 is tabled, so it’s really SLG-Resolution. I think prove/5 is the simplest and easiest-to-understand implementation of MIL to date (but of course I’d think that). The original version is in Metagol:
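To give a flavour of the inductive case without reproducing either system’s code, here is a minimal sketch of the MIL idea. It is not Louise’s prove/5 and not Metagol’s meta-interpreter; the chain metarule, the toy background knowledge and all predicate names are made up for the example:

```prolog
% Toy background knowledge.
parent(ann, bob).
parent(bob, carl).

% A second-order metarule, "chain": P(X,Y) :- Q(X,Z), R(Z,Y), where
% P, Q, R are existentially quantified over predicate symbols.
% Literals are represented as lists [Symbol|Args].
metarule(chain, [P,X,Y], [[Q,X,Z],[R,Z,Y]]).

% Predicate symbols allowed in the bodies of induced clauses.
body_symbol(parent).

% prove_mil(+Goals, +Acc, -Subs): prove a list of goal literals by
% resolution with the background knowledge and the metarules, collecting
% the metasubstitutions that make up the induced hypothesis.
prove_mil([], Subs, Subs).
prove_mil([[P|Args]|Gs], Acc, Subs) :-
    body_symbol(P),                    % background literal: call it
    Atom =.. [P|Args],
    call(Atom),
    prove_mil(Gs, Acc, Subs).
prove_mil([[P|Args]|Gs], Acc, Subs) :-
    \+ body_symbol(P),                 % target literal: use a metarule
    metarule(Name, [P|Args], Body),
    maplist(bind_symbol, Body),
    findall(Q, member([Q|_], Body), Qs),
    prove_mil(Body, [ms(Name, [P|Qs])|Acc], Subs1),
    prove_mil(Gs, Subs1, Subs).

bind_symbol([S|_]) :- body_symbol(S).
```

A query like ?- prove_mil([[grandparent,ann,carl]], [], Subs). then binds Subs to [ms(chain, [grandparent,parent,parent])], which reads as the induced clause grandparent(X,Y) :- parent(X,Z), parent(Z,Y). The real prove/5 is of course more careful than this sketch, and it is tabled, so it is really SLG-Resolution, as I say above.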

Abduction of a positive literal p is possible when its complement, ¬p, is derived during the SLD-Resolution of a Horn goal ←G with a definite clause C. In that case p is of uncertain truth, because asserting it contradicts the CWA, and I think this accounts for the philosophical perspective of abduction as uncertain inference from evidence (here, incomplete evidence).

Abduction is performed in the hybrid neuro-symbolic MIL system Meta-Abd described in this paper:

To clarify, the MIL component of Meta-Abd uses an inductive-abductive Prolog meta-interpreter to “invent” new atoms of predicates marked as abducible, similar to what you say above about inventing “facts” by abduction.
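For the abductive part in isolation, the classic abductive meta-interpreter gives the general idea. The following is a minimal sketch under that classic scheme, not Meta-Abd’s actual code, and the little object program at the end is made up: atoms of predicates marked as abducible that cannot be proved from the program are assumed true and collected as abductions.

```prolog
% prove_abd(+Goal, +Delta0, -Delta): prove Goal by SLD-resolution,
% assuming atoms of abducible predicates when they cannot be proved
% otherwise, and collecting the assumptions in Delta.
prove_abd(true, Delta, Delta) :- !.
prove_abd((A, B), Delta0, Delta) :- !,
    prove_abd(A, Delta0, Delta1),
    prove_abd(B, Delta1, Delta).
prove_abd(A, Delta, Delta) :-
    member(A, Delta).                  % already assumed
prove_abd(A, Delta0, Delta) :-
    clause(A, B),
    prove_abd(B, Delta0, Delta).
prove_abd(A, Delta, [A|Delta]) :-
    abducible(A),
    \+ member(A, Delta).               % assume a new abducible atom

% Made-up example program: broken/1 is abducible and has no clauses.
abducible(broken(_)).
:- dynamic broken/1.

faulty(X) :- lamp(X), broken(X).
lamp(l1).
```

A query ?- prove_abd(faulty(l1), [], Delta). succeeds with Delta = [broken(l1)]: the goal ←broken(l1) has no SLD-refutation, so under the CWA broken(l1) would be assumed false, and abducing it is exactly the kind of “uncertain” inference discussed above.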

Having been introduced to the code of Meta-Abd by its author, Wang-Zhou Dai, I know how Louise’s currently inductive-only meta-interpreter can be modified to also perform abduction. I’ve known this for a while but I couldn’t find a clear motivation to make this modification. I guess I couldn’t understand why abduction is proposed as a form of invention, or uncertain inference. Well, I think I get it now, and I’m very interested in handling uncertainty in a purely symbolic framework, for the purpose of (machine) learning and not just reasoning. Hence my remarks above about wanting to “re-interpret all this in an inductive setting”. I mean in the setting of ILP with MIL.

Unfortunately, for the time being I’m not being paid by anyone to work on that so I don’t know when I’ll be able to do it. I might have to suffer for my art, I guess :confused:
