Non-Monotonicity/Defeasibility Strategies and Other Stuff

Hi all! It’s been a while since I’ve posted here. Life got busy and other projects took over.

My logic journey has taken me away from Prolog for a while, but I’m checking in here to see if anyone has knowledge of some of the domains I’m working in at the moment and can point me towards resources.

I’ve begun developing a library/system to handle reasoning about state through time for game scripting and trying to build an assistant for tabletop gaming. It’s taken me down some rabbit holes.

I’m currently exploring graph-based logic/knowledge representation and non-monotonic/defeasible reasoning, particularly with regard to reasoning about time.

If anyone has any resources or experience in any of this and is happy for me to pick their brain about it let me know!

At the moment I’m trying to find a way to introduce defeasible reasoning to Conceptual Graphs and do something akin to the Rete algorithm to continuously/reactively query an evolving knowledge graph. I’m hoping to sort of side-step the frame problem with defeasibility, essentially handling all negation and state change with an invalidation mechanism of sorts (essentially, assume the last state to be true unless/until there is something explicitly negating/retracting it).
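
To make that last idea concrete, here’s a minimal plain-Prolog sketch of the “last state holds until explicitly invalidated” pattern I have in mind (asserted_at/2, retracted_at/2 and holds/2 are hypothetical stand-ins, not any conceptual-graph or Rete machinery):

% The fluent door_open is established at step 3 and explicitly invalidated at step 7.
asserted_at(door_open, 3).
retracted_at(door_open, 7).

% A fluent holds at step T if it was established at or before T and nothing
% has explicitly invalidated it in the meantime.
holds(Fluent, T) :-
    asserted_at(Fluent, T0),
    T0 =< T,
    \+ ( retracted_at(Fluent, T1),
         T1 >= T0,
         T1 =< T ).

% ?- holds(door_open, 5).   % true: assumed to persist by default
% ?- holds(door_open, 8).   % false: explicitly invalidated at step 7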

But there’s so little non-proprietary information and research that’s even adjacent to what I’m attempting, and I’m just an arts kid with only a few undergrad logic and comp sci classes under my belt, without much knowledge of where to search. So I’m looking for people with more experience in academia and industry use of similar tools to chat to.

My biggest trouble so far has been not knowing the terminology and past research well enough to know what to Google. I was unknowingly fighting with the frame problem for 12 months before I even had a name for it and could then go and research it. So I think there might still be a big repository of information out there that I just haven’t been able to find, and that someone might be able to help me find. It’s happened a few times in the process already: just knowing what to Google, be it the Rete algorithm, the event calculus or the frame problem, has unlocked a bunch of stuff that’s helped.

If you’re familiar at all with any of this and would be happy to have a conversation or even send me some resources please let me know!

2 Likes

If this were my problem I would look in some of the books by George Luger. While I don’t have all of his books or his most recent editions, I do have an earlier edition of “Artificial Intelligence: Structures and Strategies for Complex Problem Solving” by George F. Luger literally at my fingertips and use it for such questions. The book is not cheap but I have never regretted buying it.

A similar book, but one that I don’t expect to be as useful although it will have much-needed terminology, is “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig. (WorldCat)

HTH

1 Like

Thank you! I’ll see if I can pick up some second hand copies of those somewhere!

I recently got a hold of Knowledge Representation by John F. Sowa, which has mostly just reinforced things I knew already but has been helpful for clarity.

No sense in buying those books just to have a peek.

I would first suggest checking the table of contents, which is typically made freely available on the publisher’s page or on sites like Amazon.

Then, if there is something that looks of value, use the WorldCat locations to find a library that you can visit and browse the copy there; books of this kind are typically held at university libraries.

Also check your local public library to see if one can be borrowed via interlibrary loan.

Sometimes there are free, legal PDF versions online; these often appear after an author retires and the publisher no longer plans on making more hard copies.

If you then decide the book is of value, buy a copy. There is one book on the Lambda Calculus which has a free PDF version, but I use it so often that I bought the book; I still use the PDF for a quick search, but take the book with me when I know I will be sitting and waiting (think car service, morning commute, picking someone up at the airport, etc.).

1 Like

I miss having access to my university library. But I’ll check to see if I can get them through my local library or the state library. I’m in Sydney so if there’s a copy in the country it should be fairly easy to get ahold of.

Using https://www.worldcat.org/title/1085841470 with a location of Sydney, Australia, this is the first result on the list:

University of Technology Sydney
1.9 kilometers from your current location.
UTS Central (building 2), 61 Broadway
Ultimo, NSW 2007

Sounds like your project is a little similar to something I’ve been tinkering with for a while. What got me into Prolog originally was doing a free online course given by Stanford University logic professor Michael Genesereth on General Game Playing.

Quite a big library of rules for games has been developed by this project which is linked to an annual competition, and the examples include several different ways of writing rules for chess, checkers, and a variety of puzzles.

Something that’s a bit frustrating is that these are written in a Lisp dialect called KIF (Knowledge Interchange Format), which is nearly identical to Prolog except that predicate_name(some_atom, Variable1, ...) is written (predicate_name some_atom ?variable1 ...)
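
For a concrete taste of the mapping, here’s roughly how one of the tic-tac-toe rules looks in both notations (the KIF form is reconstructed from memory, so the exact spelling may differ slightly):

% GDL/KIF (reconstructed from memory):
%
%   (<= (legal ?w (mark ?x ?y))
%       (true (cell ?x ?y b))
%       (true (control ?w)))
%
% maps almost token for token onto the Prolog clause:

legal(W, mark(X, Y)) :-
    true(cell(X, Y, b)),
    true(control(W)).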

Genesereth got a lot of abuse from students in the course’s discussion forums, since he was introducing a huge subject to laymen, and Coursera dropped it, which I think is very sad. He also offered an introduction to logic course which doesn’t seem to be available anymore. (Having dabbled myself in offering online courses on Coursera, I sympathise with Genesereth, and luckily his notes are still freely hosted by Stanford at the link I included above.)

Using SWI-Prolog, I managed to write a tic-tac-toe player that’s completely unbeatable, and I have been trying to use that as a basis for writing web-based chess, checkers, etc. games, but it’s been a work in progress for ages. A while back I put some notes on SWISH -- SWI-Prolog for SHaring

I also put some notes on a MoinMoin site, which broke because it was written in Python 2, which Arch Linux no longer supports… I’ve been meaning to put that info back at my Prolog notes site, but it’s on my long todo list.

While I use Prolog as the “rules engine” which generates legal moves and next states from a current game state (thanks to the GGP project there are plenty of examples to translate from KIF), I’ve switched to using PostgreSQL to store the game tree and search it. I know @jan has been working on improving SWI-Prolog’s persistent database abilities; a key part is moving stuff out of RAM to disk, since in most games the game trees are huge.

A big problem with writing game-tree generators is that the only nodes that can be valued correctly are terminals, where you can say for a fact which player(s) won, lost or drew. Working up from there, parent nodes can be aggregated using minimax/and-or.

The snag is that usually the AI player is groping around in a twisty maze which there isn’t the time or disk space to explore all the way to a terminal, so it has to turn to heuristics to make some guess.
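
To sketch what I mean (this isn’t my actual code; terminal_value/2, heuristic_value/2, legal_moves/2 and apply_move/3 are hypothetical placeholders), a depth-limited minimax in its negamax form looks something like this, with values always taken from the point of view of the player to move and a heuristic guess used once the depth budget runs out:

negamax(State, _Depth, Value) :-
    terminal_value(State, Value), !.       % only terminals get exact values
negamax(State, 0, Value) :- !,
    heuristic_value(State, Value).         % cutoff reached: fall back to a guess
negamax(State, Depth, Value) :-
    Depth > 0,
    Depth1 is Depth - 1,
    legal_moves(State, Moves),             % assumes non-terminal states have moves
    findall(V,
            ( member(Move, Moves),
              apply_move(State, Move, Next),
              negamax(Next, Depth1, V0),
              V is -V0                     % the opponent's gain is my loss
            ),
            Values),
    max_list(Values, Value).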

My theory is to generate far enough ahead to see to what extent the opponents’ choices are narrowed. In tic-tac-toe, constantly forcing your opponent to block you getting three in a row guarantees a draw. I think this is known as focus in game-playing jargon.

Once the choice of moves has been filtered to those which constrain opponents most, the next step is to filter on mobility, i.e. which state gives whichever player is “me” the most options. In chess, this makes the queen the most valuable piece once the board is open. I think mobility yields pretty much the same valuation of pieces as the traditional pawn = 1, …, queen = 9 system, plus it gets an automaton to position the pieces for lots of options without much knowledge of the game being played.
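
A mobility heuristic along those lines could then be plugged in as the heuristic_value/2 of the sketch above, something like this (player_to_move/2, opponent/2 and legal_move/3 are again hypothetical placeholders):

% Score a state by how many legal options it leaves the player to move,
% minus the options it leaves the opponent.
heuristic_value(State, Value) :-
    player_to_move(State, Me),
    opponent(Me, Them),
    findall(M, legal_move(State, Me, M), MyMoves),
    findall(M, legal_move(State, Them, M), TheirMoves),
    length(MyMoves, Mine),
    length(TheirMoves, Theirs),
    Value is Mine - Theirs.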

Anyway, I’ve found it a fascinating subject and it has been very educational, though my game players still suck at anything complicated.

3 Likes

I’ve actually encountered KIF in my journey recently. I’ve just started looking at representing rules as Conceptual Graphs and the ISO standard representation for those is called Graph Based Knowledge Representation, which can easily be mapped to KIF.

I like GBKR as it maps quite neatly onto the kinds of ways tabletop and video game rules are written.

I’m lucky in that I don’t have to do exhaustive or efficient searching as I’m not yet trying to write a player AI. I only care about being able to explain how the game reached its current state, and reason about what is immediately possible as a choice for human players.

Glad to know that a resource of game rules in KIF is available though! If they map neatly into GBKR (which they may not, as it is a subset of KIF) it could be a good way to test my system. Although it may also work just fine with KIF representations directly, depending on how I work it.

At the moment I’m looking at abductive reasoning methods. But I’ve yet to crack it, as I need to do some groundwork to get familiar with existing algorithms so I can revise them to be reactive/continuously queryable.

My ADHD is just really struggling to do that, as it makes that kind of boring, piecemeal learning hard when I already grasp a lot of the broader picture. I think having someone to talk to about things and work through them collaboratively might help with that.

I actually started using ChatGPT for that last night as a test run and it almost worked. But the subject matter is a bit niche and it started to break down pretty fast.

1 Like

A good place to find game rules written in KIF is http://games.ggp.org/

What I’d consider the “Hello World” example is http://games.ggp.org/base/games/ticTacToe/ticTacToe.kif

I wrote a script to automate turning these into Prolog, which created this from the above KIF (actually an earlier version; I see it’s been neatened up since I last downloaded it – good to see some life remaining on the site):

:- module(tictactoe, []).

:- thread_local true/1, does/2.
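% true/1 holds the fluents of the current game state and does/2 the move each
% role has chosen this turn; both are asserted and retracted by whatever code
% drives this module, hence the thread_local declaration.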

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Tictactoe
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Components
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

role(white).
role(black).

base(cell(M, N, x)) :- index(M), index(N).
base(cell(M, N, o)) :- index(M), index(N).
base(cell(M, N, b)) :- index(M), index(N).
base(control(white)).
base(control(black)).

input(R, mark(M, N)) :- role(R), index(M), index(N).
input(R, noop) :- role(R).

index(1).
index(2).
index(3).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% init
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

init(cell(1, 1, b)).
init(cell(1, 2, b)).
init(cell(1, 3, b)).
init(cell(2, 1, b)).
init(cell(2, 2, b)).
init(cell(2, 3, b)).
init(cell(3, 1, b)).
init(cell(3, 2, b)).
init(cell(3, 3, b)).
init(control(white)).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% legal
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

legal(W, mark(X, Y)) :-
  true(cell(X, Y, b)),
  true(control(W)).

legal(white, noop) :-
  true(control(black)).

legal(black, noop) :-
  true(control(white)).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% next
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

next(cell(M, N, x)) :-
  does(white, mark(M, N)),
  true(cell(M, N, b)).

next(cell(M, N, o)) :-
  does(black, mark(M, N)),
  true(cell(M, N, b)).

next(cell(M, N, W)) :-
  true(cell(M, N, W)),
  W \== b.

next(cell(M, N, b)) :-
  true(cell(M, N, b)),
  \+does(_, mark(M, N)).

next(control(white)) :-
  true(control(black)).
    
next(control(black)) :-
  true(control(white)).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% goal
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

line(X) :-
  true(cell(M, 1, X)),
  true(cell(M, 2, X)),
  true(cell(M, 3, X)).

line(X) :-
  true(cell(1, N, X)),
  true(cell(2, N, X)),
  true(cell(3, N, X)).

line(X) :-
  true(cell(1, 1, X)),
  true(cell(2, 2, X)),
  true(cell(3, 3, X)).

line(X) :-
  true(cell(1, 3, X)),
  true(cell(2, 2, X)),
  true(cell(3, 1, X)).

open :- true(cell(_, _, b)).

goal(white, 100) :-
  line(x),
  \+line(o).
    
goal(white, 50) :-
  \+line(x),
  \+line(o).
    
goal(white, 0) :-
  \+line(x),
  line(o).

goal(black, 100) :-
  \+line(x),
  line(o).

goal(black, 50) :-
  \+line(x),
  \+line(o).

goal(black, 0) :-
  line(x),
  \+line(o).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% terminal
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

terminal :- line(x).
terminal :- line(o).
terminal :- \+open.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
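
To show how such a module can be driven from plain Prolog, here’s a rough sketch of a driver (my own, not part of the GGP tooling; it assumes the module above is saved as tictactoe.pl): load a state into true/1, pick a move for each role into does/2, and collect the successor state from next/1.

:- use_module(tictactoe, []).   % assumes the module above lives in tictactoe.pl

% Replace the current state with the given list of fluents.
set_state(Fluents) :-
    retractall(tictactoe:true(_)),
    forall(member(F, Fluents), assertz(tictactoe:true(F))).

% The initial state, as a list of fluents.
initial_state(Fluents) :-
    findall(F, tictactoe:init(F), Fluents).

% All Role-Move pairs that are legal in the current state.
legal_moves(Moves) :-
    findall(R-M, tictactoe:legal(R, M), Moves).

% Record one move per role in does/2 and compute the successor state.
play(Does, NextFluents) :-
    retractall(tictactoe:does(_, _)),
    forall(member(R-M, Does), assertz(tictactoe:does(R, M))),
    setof(F, tictactoe:next(F), NextFluents).

% Example:
% ?- initial_state(S0), set_state(S0),
%    play([white-mark(2,2), black-noop], S1),
%    set_state(S1), legal_moves(Ms).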


Stanford’s “GGP club” seems to have fizzled. I tried to join an online discussion group linked to somewhere on the site, but no luck (or there just isn’t any discussion). Most of the supporting software for KIF seems to be orphaned. The course used a compiler written in Java and involved having to install the Eclipse IDE, which is all very alien to me. I found simply doing it in Prolog much easier.

Would be nice to build a community of hobbyists interested in writing competing AI players which get given rules and time limits to play against each other, as Stanford did, but using plain vanilla Prolog.

1 Like

https://groups.google.com/g/ontolog-forum has possibly useful information (search for “kif” or “gql”)

2 Likes

There are a couple of things your post brings to mind which I am not an expert on but have found interesting.

Answer Set Programming is definitely associated with non-monotonic reasoning. It has some interesting capabilities when it comes to modelling time and planning. This paper was neat:
https://personal.utdallas.edu/~gupta/csr-scasp.pdf Automating Commonsense Reasoning with ASP and s(CASP). s(CASP) is an SWI-Prolog library with similar but different capabilities to clingo (https://potassco.org/), which is a standalone solver with something of the flavor of a SAT solver or MiniZinc, if you’ve come across those. I believe there is a body of work connecting ASP to narrative production.

You may also find the work of Chris Martens interesting (Chris Martens - Publications). They use logic programming (in particular linear logic programming) in different forms for narrative production in their thesis.

2 Likes

The logical negation (-)/1 of ASP isn’t non-monotonic, it’s monotonic. That’s actually the advantage of ASP over ordinary Prolog: it has a logical negation.

Then you can take ASP and add a further negation, (not)/1, similar to negation as failure in ordinary Prolog, which makes it non-monotonic.

See also here, where (-)/1 is called strong negation:

What Is Answer Set Programming?
In the context of logic programming, this idea leads to the need to
distinguish between two kinds of negation: negation as failure, used
above, and strong (or “classical”) negation, which is denoted in the
language of LPARSE by - (Gelfond & Lifschitz 1991).
https://www.cs.utexas.edu/users/vl/papers/wiasp.pdf

1 Like

I agree with what you’re saying: (-)/1 is monotonic and (not)/1 is non-monotonic. But from what I’ve been reading (I’ve been kind of fixated on this stuff the last couple of days), the core insight ASP adds is in fact its treatment of (not)/1: finding a declarative meaning for negation as failure via the concept of stable models and their relationship with non-monotonic logics such as default logic and autoepistemic logic (not that I know too much about these). The paper I linked above is a good reference on these points, and so is https://www.cs.utexas.edu/users/vl/papers/13defs.pdf, “Thirteen Definitions of a Stable Model”. By comparison, (-)/1 is almost a syntactic-sugar convenience feature (although an excellent one), and I have not seen a reference describing it as the core feature of ASP.

2 Likes

These semantics also translate to ordinary Prolog. Ordinary Prolog is just a special case of ASP, with the following restrictions:

  • No disjunction in head (allowed in ASP).
  • No empty head (allowed in ASP).
  • No logical negation (allowed in ASP).

Or as the paper expresses it, traditional rules = ordinary Prolog:

These definitions are equivalent to each other when applied to
“traditional rules” – with an atom in the head and a list of atoms,
some possibly preceded with the negation as failure symbol, in the body:
A0 ← A1, . . . , Am, not Am+1, . . . , not An. (1)
https://www.cs.utexas.edu/users/vl/papers/13defs.pdf

The challenge is rather to lift these semantics to ASP, since they were already long known for ordinary Prolog. But the Lifschitz paper isn’t exhaustive; there are quite a few more routes to understanding ASP. For example, a variant of unit resolution that only takes positive literals might quite well explain the logical negation of ASP and how stable models are generated. That’s also closer to how ASP was invented and has less to do with circumscription.

The link to unit resolution also tells us a fundamental difference between ASP and ordinary Prolog, namely that ASP is rather based on forward chaining, whereas ordinary Prolog usually performs backward chaining. Despite ASP’s forward-chaining origins, you can of course try to bring in some goal-directed behaviour, similarly to how Prolog tabling is both forward and backward chaining. You might also get a lesser ASP if you restrict it, for example by not allowing disjunction in the head, as is done in s(CASP), and then replace stable models with an abduction mechanism.

1 Like

Hey, have you had your mind blown by the s(CASP) paper, too? I sure have! It’s been a while since I read a paper so well-written, so clear, and so intellectually satisfying.

It’s the first time also that I found a clear motivation for ASP. In the past, I had heard it proposed as a purely declarative logic programming language with classical negation, but that sounded … a bit meh? I also couldn’t understand the execution strategy of grounding the entire Herbrand base, which robs the language of expressivity (no lists!) and makes its execution NP-complete (because SAT-solving). Btw, I still don’t get that bit. Why?

In the s(CASP) paper instead I found a motivation of ASP as an elegant framework for reasoning with uncertainty, with certainty.

I have to explain this a bit: I’m coming from the Inductive Logic Programming (ILP) point of view, where dealing with uncertainty is a lot more important than in ordinary, deductive logic programming. That’s because the real world is a dirty, noisy place, full of uncertainty, and we want our machine learning systems to be able to deal with that. In ILP the standard approaches to reasoning with uncertainty are “magic” error tolerance parameters (like the “error” parameter in Aleph) or unholy mixtures of FOL with probabilities (there’s an entire field of Statistical Relational Learning which is all about that).

Magic parameters are magic, but the problem with probabilistic reasoning is that in representing uncertainty by probabilities, one loses the ability to represent certainty, even when certainty is there. As a trivial example, “1 + 1 = 2” and “1 + 1 = 3” must both be assigned a probability value, and the best one can say about these two statements in a probabilistic framework is that one is more likely than the other. And gods help us if the data from which probabilities are calculated is such that “1 + 1 = 2” is the less likely. Yet a proof can be derived of “1 + 1 = 2” from the axioms and theorems of arithmetic, which btw is the only context in which “1 + 1 = x” really makes sense as a statement, so there is no reason to have any uncertainty about it.

Probabilistic reasoning delendum est!

But what to replace it with? Here’s what the s(CASP) paper describes (see the numbered list in section 4.1, page 8), or rather, this is my re-interpretation of it in terms of SLD-resolution proofs with negation as failure (NAF):

Shade of truth          SLDNF representation
--------------          --------------------
p is certainly true     p← is an axiom 
p is probably true      ¬p has no SLD-derivation
p is of unknown truth   p and ¬p both have an SLD-derivation
p is probably false     ¬p has an SLD-derivation
p is certainly false    not_p← is an axiom and p← not not_p is a theorem

Where an “axiom” is what we call a “fact” in Prolog, and a “theorem” is what we call a “predicate”, or more correctly a predicate definition, i.e. a set of definite clauses. These basically replace ASP’s classical negation (I guess I still can’t get my head around that :P). I say “axiom” instead of “fact” to represent the, er, fact that all our knowledge is ultimately axiomatic, and we can have no greater degree of certainty than the axioms of a theory. Opinion, that!
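
To make the table a little more concrete, here is a toy Prolog rendering of the “probably true” and “probably false” rows (my own reading of the paper, not what s(CASP) actually does; not_flies/1 stands in for ¬flies/1):

bird(tweety).
bird(pingu).
penguin(pingu).

flies(X) :- bird(X), \+ not_flies(X).   % "probably true" unless refuted
not_flies(X) :- penguin(X).             % "probably false" for penguins

% ?- flies(tweety).   % succeeds: not_flies(tweety) has no SLD-derivation
% ?- flies(pingu).    % fails:    not_flies(pingu) has an SLD-derivation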

In any case, this blows my mind right through, not least because I’ve been looking for a formalism that could do all this, reason with uncertainty, with certainty, for a long time. John McCarthy has said that NAF is a simple way to represent uncertainty (Oral History of John McCarthy - YouTube) and now this guy Gupta shows how to do it in practice. Now all that remains for me is to re-interpret all this in an inductive setting. It feels like someone just tied rockets to my ankles.

4 Likes

Oh wow! I haven’t read it yet, no. I’ve started to see some mentions of it around, but thought I had mostly taken what I needed from ASP already. I might have to revisit!

Those 5 shades of truth reflect similar structures I’ve started using in my current approach, but in my approach they’ve been very intuitive and messy. Bringing a formalism like that to them could really help my understanding of what it is I’ve been trying to accomplish intuitively.

Might have some reading material for today now! Thank you! It’s been a while since I found something close enough to what I’m working on to be helpful.

Solving this so far has mostly been me chasing after an intuition, finding systems and formalisms that approximate what I’ve been doing and slowly getting closer to what I need.

1 Like

Same thing here! But the more I look, the more I find that folks have thought of all that well before me and a lot of work has already been done. It’s just a very very tricky subject, if one is indeed looking for an elegant (simple, not cumbersome, intuitive) formalism.

Good luck!

Btw, just to be clear, my tabulation of the “shades of truth” and their SLDNF representations above are my re-interpretation of the s(CASP) paper’s explanation of ASP, and my re-interpretation is in the context of Good, Old-Fashioned, SLD-Resolution because that’s what I know. s(CASP) does something very similar, in that it adds new clauses to a Prolog program that represent classical negation, but it’s not exactly what I show above.

I can follow about half of that but a lot goes above my level of understanding.

What I can say is that in my use case there will often be cases where “p” and “~p” can both be derived, precisely because many of the systems I’m modelling aren’t entirely logically consistent, or because their natural-language reasoning isn’t easily (or usefully) reducible to logical cases and is defeasible based on some metric of specificity or on a case-by-case basis.

Allowing for p & ~p actually becomes a more efficient way of handling a lot of these cases. When that “state”, for lack of a better term, occurs, we know something in the model is at odds with something else, and can introduce reasoning to handle it.

I treat them almost as hypotheticals. One of the rules I use to test whether I’ve got a robust enough system comes from Dungeons and Dragons 5th edition and goes something like this:

When a half-orc character is reduced to zero hit points, their player can choose to drop to 1 hit point instead, once per long rest. (If you are unfamiliar, a long rest is another game rule which has some preconditions.)

It’s important to note that 5th edition also uses the rather nebulous concept of “specificity” for defeasibility.

You could introduce some logic to make the resultant damage itself defeasible, by introducing into all damage logic a check for whether a character is a half-orc and has chosen etc., but this requires rewriting that whole core damage logic if this rule were changed or introduced, or running extra reasoning when it is never used.

Or, what I’ve found to be more elegant, is to allow two predicates like a: hp(character, 0) and b: hp(character, 1) to both hold true, and then have the querying reasoner introspect the antecedents that led to that conclusion and see either that b can only hold true if a held true, or that b is somehow a consequence of a, and so introduce a kind of “consequence of a hypothetical” meta-reasoning that can be employed when these conflicts take place. Or that the tree of reasoning for b is longer and therefore more specific, so b trumps a.

I don’t know how that maps to logic formalisms at all, but it’s more efficient to compute (particularly with many continuous queries over streaming data sets).

It’s become quite an elegant way to handle values that change through time as well. Rather than having to solve the frame problem or handle world state, you simply take the predicate of a value at a time and assume the most recent value before the query time is the current one. If the time is unknown but you know the sequence of antecedent events that triggered the value change, you can sequence them, check for conflicts, logical inconsistencies or branching consequence paths, and handle them as they arise with defeasibility strategies.
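
A minimal sketch of that “most recent value wins” idea, with hypothetical hp_at/3 facts recording a character’s hit points at given times:

hp_at(grog, 0, 12).
hp_at(grog, 3, 7).
hp_at(grog, 5, 1).

% The value at QueryTime is the most recently recorded value at or before
% QueryTime, assumed to persist until something later overrides it.
hp(Char, QueryTime, HP) :-
    hp_at(Char, T, HP),
    T =< QueryTime,
    \+ ( hp_at(Char, Later, _),
         Later =< QueryTime,
         Later > T ).

% ?- hp(grog, 4, HP).   % HP = 7
% ?- hp(grog, 9, HP).   % HP = 1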

I think part of why I’ve gravitated to Conceptual Graphs as my cornerstone is that their semantics map fairly closely to natural language. A lot of what I’m doing is connecting and mapping semantics onto each other in ontological space, with some reasoning applied, but only where it is needed.

And many of the games I’m modelling write their rules in that kind of natural language, with a lot of exceptions and edge cases. You need to be able to reconcile them on the fly, much the same way you do as a human, rather than bake them into a singularly consistent, logically provable set. Especially with large rule sets that can be used piecemeal and altered, often during the run of a game, as in Dungeons and Dragons.

Early on I tried to account for this by having a lot of “this is a safe assumption until proven otherwise” clauses built into the rules. But that quickly became unwieldy, and not future-proof or modular enough when introducing new rules or building large rule sets. Instead it became far more reasonable to build the logic for handling and reasoning about exceptions into the reasoner itself.

It has also opened up a lot of room for proceduralism in specific common types of queries and defeasibility strategies to improve search efficiency.

For me it’s okay if my rules say p & ~p, because I also have rules for how to handle that. And if I ever don’t, I want the system to be able to explain its reasoning to a human so they can make a final decision on the truth.

It’s certainly not a robust solution for solving logical problems, but it’s a very good solution for my use case that avoids a lot of the pitfalls in defeasibility and temporal reasoning without having to worry about the frame problem and all the rest.

I wouldn’t even necessarily call what I’m doing logic programming anymore so much as “reasonable” programming, or maybe even some breed of natural-language programming. It’s not so much trying to be provable as efficiently approximating the intuitive reasoning practices of a human tackling the same problem.

1 Like

[This is a very long comment and I missed the rest of the conversation while writing it. Apologies!]

Yes, that’s absolutely right, as usual. It’s the fault of my sloppy thinking and sloppy notation: ¬p is meant as an atomic Horn goal, not an atom (in the FOL, not the Prolog, sense) with classical negation. It should be denoted as ←p. Thank you for pointing out my error to me.

With that correction in mind, then: if ←p is an atomic Horn goal, and p and ¬p are a complementary pair eliminated during resolution, then the truth of p is uncertain (here, ¬p is a negative literal in a Horn goal, so it is correct to denote it with ¬).

More precisely, in that case the truth of p is “uncertain” under a closed world assumption (CWA). NAF under a CWA, as in SLDNF-Resolution, is non-monotonic, which means that if we add clauses to a normal program (one with NAF literals in clause bodies), its success set can not only increase but also decrease in cardinality (and the same if we remove clauses). Then a positive literal p that previously had an SLDNF-derivation (equivalently, a goal ←p that previously had an SLDNF-refutation) may now no longer have one, and vice versa.

And that is where the uncertainty comes from: we may be able to establish a contradiction between p and ¬p now, but we have no idea whether that will still be the case in the future. Thus, a contradiction indicates not inconsistency, but uncertainty.

You have to excuse me if this is still not terminologically spick and span, but I’m still feeling my way around both the ideas, and the notation.

Yes! In the framework above, the atom p of uncertain truth is to be abduced.

I have to say that I started fumbling in the dark towards all this after a conversation with Antonis Kakas at the Hook meeting on Cognitive AI at the Royal Society in London, in September this year, so it’s no accident that I considered the role of abduction in all this.

I have to say also that I can only “grok” abduction, as well as deduction and induction, in the context of NAF and SLD-resolution - because that’s all I know! In that context, given a definite clause C and a Horn goal ←G: “deduction” is the derivation of a new positive literal p in the head of C; “induction” is the derivation of a new definite clause D of the same predicate as a literal in ←G; and “abduction” is the derivation of a new positive literal p (that is the complement of a literal) in the body of C.

I bring good news: not only am I Greek, but also, in principle, all three modes of inference can be performed with Good, Old-Fashioned SLD-resolution and, in practice, with a suitably modified Prolog meta-interpreter.

Deduction, of course, is done with the standard, “vanilla”, Prolog meta-interpreter (and its many variants).
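
For anyone following along, that is the textbook three-clause meta-interpreter (program clauses must be accessible to clause/2, e.g. consulted or declared dynamic):

prove(true) :- !.
prove((A, B)) :- !,
    prove(A),
    prove(B).
prove(Goal) :-
    clause(Goal, Body),
    prove(Body).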

Induction is possible with the inductive Prolog meta-interpreters in Meta-Interpretive Learning (MIL), a form of Inductive Logic Programming (ILP), that include second-order definite clauses in resolution. You can find the most recent example of one such meta-interpreter in my MIL system, Louise:

The inductive meta-interpreter is prove/5 starting on line 465. You’ll recognise its structure as that of the “vanilla” Prolog meta-interpreter. prove/5 is tabled so it’s really SLG-Resolution. I think prove/5 is the simplest and easiest to understand implementation of MIL to date (but of course I’d think that). The original version is in Metagol:

Abduction of a positive literal p is possible when its complement, ¬p, is derived during the SLD-Resolution of a Horn goal ←G with a definite clause C. Hence, p is of uncertain truth because it is a contradiction under a CWA. This I think accounts for the philosophical perspective of abduction as uncertain inference from evidence (here, incomplete evidence).

Abduction is performed in the hybrid neuro-symbolic MIL system Meta-Abd described in this paper:

To clarify, the MIL component of Meta-Abd uses an inductive-abductive Prolog meta-interpreter to “invent” new atoms of predicates marked as abducible, similar to what you say above about inventing “facts” by abduction.
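
The general pattern is easy to sketch in a toy abductive meta-interpreter (my own sketch, not Meta-Abd’s actual code): goals whose predicate is declared abducible may be assumed rather than proved, and the assumptions made along the way are collected in a list.

:- dynamic abducible/1.

% prove(Goal, Assumptions0, Assumptions)
prove(true, As, As) :- !.
prove((A, B), As0, As) :- !,
    prove(A, As0, As1),
    prove(B, As1, As).
prove(G, As, As) :-                 % reuse an assumption already made
    member(G, As), !.
prove(G, As, [G|As]) :-             % abduce: assume G and record it
    abducible(G).
prove(G, As0, As) :-                % otherwise prove G against the program
    \+ abducible(G),
    clause(G, Body),
    prove(Body, As0, As).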

Having been introduced to the code of Meta-Abd by its author, Wang-Zhou Dai, I know how Louise’s currently inductive-only meta-interpreter can be modified to also perform abduction. I’ve known this for a while but I couldn’t find a clear motivation to make this modification. I guess I couldn’t understand why abduction is proposed as a form of invention, or uncertain inference. Well, I think I get it now, and I’m very interested in handling uncertainty in a purely symbolic framework, for the purpose of (machine) learning and not just reasoning. Hence my remarks above about wanting to “re-interpret all this in an inductive setting”. I mean in the setting of ILP with MIL.

Unfortunately, for the time being I’m not being paid by anyone to work on that so I don’t know when I’ll be able to do it. I might have to suffer for my art, I guess :confused:

1 Like

Regarding temporal reasoning: treating NAF as a form of non-monotonic reasoning with uncertainty also implies a temporal dimension to reasoning. We assume that, “so far”, what we know to be true is true, and we derive all the logical consequences of our current assumptions, but we allow for future information to change our beliefs. Thus, we avoid the risk of being trapped in a world of bad assumptions.

Hah! When you said you’re developing a game scripting system, I thought you meant computer games, not board games. While I don’t play much anymore, I’ve done a bit of work with Magic: The Gathering. I’m interested in all aspects of this with respect to machine learning (with ILP): learning the rules, learning the language on the cards (“ability text”), learning to play, and so on.

1 Like