Non-Monotonicity/Defeasibility Strategies and Other Stuff

Hey, have you had your mind blown by the s(CASP) paper, too? I sure have! It’s been a while since I read a paper so well-written, so clear, and so intellectually satisfying.

It’s also the first time I’ve found a clear motivation for ASP. In the past, I’d heard it proposed as a purely declarative logic programming language with classical negation, but that sounded … a bit meh? I also couldn’t understand the execution strategy of grounding the entire Herbrand base, which robs the language of expressivity (no lists!) and makes its execution NP-complete (because it boils down to SAT-solving). Btw, I still don’t get that bit. Why?

In the s(CASP) paper, instead, I found a motivation for ASP as an elegant framework for reasoning with uncertainty, with certainty.

I have to explain this a bit: I’m coming at this from the Inductive Logic Programming (ILP) point of view, where dealing with uncertainty is a lot more important than in ordinary, deductive logic programming. That’s because the real world is a dirty, noisy place, full of uncertainty, and we want our machine learning systems to be able to deal with that. In ILP the standard approaches to reasoning with uncertainty are “magic” error-tolerance parameters (like the “error” parameter in Aleph) or unholy mixtures of FOL with probabilities (there’s an entire field, Statistical Relational Learning, that is all about that).

Magic parameters are magic, but the problem with probabilistic reasoning is that in representing uncertainty by probabilities, one loses the ability to represent certainty, even when certainty is there. As a trivial example, “1 + 1 = 2” and “1 + 1 = 3” must both be assigned a probability value, and the best one can say about these two statements in a probabilistic framework is that one is more likely than the other. And gods help us if the data from which the probabilities are estimated is such that “1 + 1 = 2” is the less likely one. Yet a proof of “1 + 1 = 2” can be derived from the axioms and theorems of arithmetic (which, btw, is the only context in which “1 + 1 = x” really makes sense as a statement), so there is no reason to have any uncertainty about it.

Probabilistic reasoning delendum est!

But what to replace it with? Here’s what the s(CASP) paper describes (see the numbered list in section 4.1, page 8), or rather, this is my re-interpretation of it in terms of SLD-resolution proofs with negation as failure (NAF):

Shade of truth          SLDNF representation
--------------          --------------------
p is certainly true     p← is an axiom
p is probably true      ¬p has no SLD-derivation
p is of unknown truth   p and ¬p both have SLD-derivations
p is probably false     ¬p has an SLD-derivation
p is certainly false    not_p← is an axiom and p← not not_p is a theorem

Where an “axiom” is what we call a “fact” in Prolog and a “theorem” is what we call a “predicate”, or more correctly a predicate definition, i.e. a set of definite clauses. These basically replace ASP’s classical negation (I guess I still can’t get my head around that :P). I say “axiom” instead of “fact” to represent the, er, fact that all our knowledge is ultimately axiomatic, and we can have no greater degree of certainty than the axioms of a theory. Opinion, that!
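Just to make the table concrete, here’s a toy propositional sketch in Python. This is entirely my own illustration, not s(CASP) syntax or Gupta’s algorithm: the program encoding (a dict of clauses), the `not_` prefix convention for explicitly derivable ¬p, and the function names are all made up. The `shade` function applies the table’s rows, checked from most to least certain, and it simplifies the “certainly false” row to just the `not_p←` axiom check.

```python
def derivable(atom, program, depth=0):
    """Naive SLD-style derivation check for a propositional definite program.
    program maps each head atom to a list of clause bodies (lists of atoms);
    a fact is a head with an empty body. depth is a crude loop guard."""
    if depth > 100:
        return False
    for body in program.get(atom, []):
        if all(derivable(b, program, depth + 1) for b in body):
            return True
    return False

def shade(p, program):
    """Classify atom p per the shades-of-truth table.
    The rows overlap, so the checks are ordered most-certain-first.
    'certainly false' is simplified to: not_p<- is an axiom."""
    not_p = "not_" + p
    if [] in program.get(p, []) and not derivable(not_p, program):
        return "certainly true"      # p<- is an axiom
    if [] in program.get(not_p, []):
        return "certainly false"     # not_p<- is an axiom
    pos = derivable(p, program)
    neg = derivable(not_p, program)
    if pos and neg:
        return "unknown"             # both p and ¬p have derivations
    if neg:
        return "probably false"      # ¬p has a derivation
    return "probably true"           # ¬p has no derivation

# A made-up example program.
program = {
    "sunny":       [[]],            # axiom: sunny<-
    "not_raining": [[]],            # axiom: not_raining<-
    "warm":        [["sunny"]],     # warm<- sunny
    "not_warm":    [["raining"]],   # not_warm<- raining (never fires)
    "windy":       [["sunny"]],     # windy derivable ...
    "not_windy":   [["sunny"]],     # ... and ¬windy too: unknown
}

for atom in ["sunny", "raining", "warm", "windy"]:
    print(atom, "->", shade(atom, program))
```

Running this prints `sunny -> certainly true`, `raining -> certainly false`, `warm -> probably true` and `windy -> unknown` — e.g. `warm` is only “probably true” because, while ¬warm has no derivation, `warm` is derived rather than being an axiom.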

In any case, this blows my mind right through, not least because I’ve been looking for a formalism that could do all this, reason with uncertainty, with certainty, for a long time. John McCarthy has said that NAF is a simple way to represent uncertainty (Oral History of John McCarthy - YouTube) and now this guy Gupta shows how to do it in practice. Now all that remains for me is to re-interpret all this in an inductive setting. It feels like someone just tied rockets to my ankles.
