Changing results with tabling

Hello all.

I have an implementation of Gordon Plotkin’s program reduction algorithm and I would like to use Swi’s tabling to control left-recursion. Briefly, Plotkin’s algorithm reduces a set of clauses by throwing out each clause that is entailed by the rest of the program. In my implementation, entailment is decided by resolution. In practice, that means call/1.
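To make the setting concrete, here is a minimal, hypothetical sketch of such a redundancy test. The predicate names are mine, not the actual implementation's, and I assume clauses are stored in the dynamic database as Head :- Body terms, with Body = true for facts:

```prolog
% C is redundant if the rest of the program entails it: ground C, add
% its body literals as facts, then try to prove its head by resolution.
% Note that call(Head) can loop on left-recursive programs - which is
% the problem discussed in this thread.
redundant(C) :-
    copy_term(C, (Head :- Body)),
    numbervars((Head :- Body), 0, _),   % ground the clause under test
    retract(C),                         % resolve against P \ {C}
    forall(body_literal(Body, L), assertz(L)),
    (   call(Head) -> Entailed = true ; Entailed = false ),
    forall(body_literal(Body, L), retract(L)),
    assertz(C),                         % restore the program
    Entailed == true.

% Enumerate the literals of a clause body (true stands for an empty body).
body_literal((L, _), L).
body_literal((_, Ls), L) :- !, body_literal(Ls, L).
body_literal(L, L) :- L \== true.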

I would like to apply the algorithm to programs with possible left-recursions, without having to jump through hoops to avoid evaluation “going infinite”. I do that with a depth-bounded meta-interpreter but I was hoping to drop all that (dirty and slow code) and use Swi’s swanky new tabling facilities instead.
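For context, a depth-bounded meta-interpreter of the kind mentioned above typically needs only a few clauses. This is the generic textbook version, not the project's code, and it omits built-in goals:

```prolog
% Fails any proof deeper than the given bound, so left-recursive
% programs can no longer loop forever (at the cost of incompleteness).
prove(_Goal, Depth) :-
    Depth =< 0,
    !,
    fail.
prove(true, _Depth) :- !.
prove((A, B), Depth) :- !,
    prove(A, Depth),
    prove(B, Depth).
prove(Goal, Depth) :-
    clause(Goal, Body),        % user-defined predicates only
    Depth1 is Depth - 1,
    prove(Body, Depth1).
```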

Unfortunately, it seems that when I try, some of the algorithm’s results change and clauses are found redundant that previously weren’t.

The following is some output from the implementation. It should be self-explanatory:

% Without tabling - correct reduction

Program clauses:

Program reduction:

Redundant clauses:
% With tabling - incorrect reduction

Program clauses:

Program reduction:

Redundant clauses:

I’ve tried to isolate the effect as much as possible and created a github repository with the bare-bones of the code that implements the algorithm (plus comments) and also some unit tests. Here:

The 20 (unblocked) unit tests in the repository (in program_reduction.plt) all pass without tabling, but 13 fail when tabling is enabled, because too many clauses are found to be redundant. Blocked tests “go infinite” without tabling.

To disable and enable tabling, lines 183 and 189 in the file can be commented or uncommented.
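For readers unfamiliar with tabling: enabling it for a left-recursive predicate in SWI-Prolog is just a directive. This is the stock textbook example, not the repository's code:

```prolog
:- table path/2.

% Left-recursive: without the table/1 directive, ?- path(a, X) loops.
path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, Y) :- edge(X, Y).

edge(a, b).
edge(b, c).

% With tabling, ?- path(a, X) terminates and returns both X = b and
% X = c (answer order is not guaranteed under tabling).
```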

I’d appreciate some help to understand what is going on here.

Thanks for the help!

Edit: forgot to say: I’m on version 8.1.26, 64 bits, and Windows 10.

For those looking for the thesis paper that explains the algorithm.

“Automatic methods of inductive inference” by Gordon D. Plotkin (pdf)

I have not read it, but just providing the link to save you time.

Thank you Eric. I have also transcribed the algorithm in the README file of my project and added some, hopefully clarifying, comments in the main module of the implementation.

Not related to the question you asked but your code contains:

	forall(member(C, Cs)
	      ,(copy_term(C, C_)
	       ,numbervars(C_, 0, _)
	       ,writeln(C_)
	       )
	      )

This can be simplified to:

        forall(member(C, Cs)
              ,(numbervars(C, 0, _)
               ,writeln(C)
               )
              )

The forall/2 predicate uses negation and thus binding of variables by the call to numbervars/3 will not be reflected in the list_clauses/1 argument.
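To see the point concretely, forall/2 is defined as \+ (Cond, \+ Action), so any bindings made while proving Action are undone when forall/2 succeeds:

```prolog
?- forall(member(X, [A, B]), X = 1).
true.

% ... but A and B are left unbound afterwards:
?- forall(member(X, [A, B]), X = 1), A == 1.
false.
```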

Thank you Paulo, well spotted. The code in my implementation is a bit stale :slight_smile:

Edit: Also, btw, my code is full of asserts and retracts that I absolutely hate (the dynamic database is evil!) and I’ve tried to get rid of them, but it’s very difficult to do without severely degrading the efficiency of the implementation. For instance, I tried to store the program in a list and find clauses with member/2 (rather than finding them in the database with clause/2), but that causes all manner of headaches. Plotkin’s algorithm modifies a program, and I don’t really know any way to do this except a) use the dynamic database (fast but dirty) or b) keep the program in a list (clean but slow). If you or anyone else has better ideas I’m all ears.
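For what it’s worth, option (b) is usually paired with a meta-interpreter that resolves against the list instead of the database. A minimal sketch (not the project’s code, and indeed slower because every resolution step scans the list with member/2):

```prolog
% Clauses are kept as a list of (Head :- Body) terms, with Body = true
% for facts; "removing" a clause is then just select/3 on the list.
prove_in(_Program, true) :- !.
prove_in(Program, (A, B)) :- !,
    prove_in(Program, A),
    prove_in(Program, B).
prove_in(Program, Goal) :-
    member(Clause, Program),
    copy_term(Clause, (Goal :- Body)),  % rename apart, unify the head
    prove_in(Program, Body).
```

For example, prove_in([(p(1) :- true), (q(X) :- p(X))], q(Y)) binds Y = 1.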

Can’t we use coinduction to detect the redundant clauses? If we get a coinductive success while proving a goal, it means that using a clause resulted in the same goal we’re proving.
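The idea can also be illustrated with SWI-Prolog’s library(coinduction); the actual test below used Logtalk’s coinduction support instead, so this is just a sketch of the principle using the m/3 example from earlier in the thread:

```prolog
:- use_module(library(coinduction)).

:- coinductive m/3.

m(_A, B, C) :- m(_D, B, C).
m(_A, B, C) :- m(_D, C, B).

% ?- m(x, b, c).
% succeeds coinductively: the body of the first clause unifies with the
% goal being proved, i.e. using that clause reproduced the original goal.
```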

For the example you posted above, I get the second clause reported as redundant. For the kin_a example, I get clauses 3, 6, and 7 listed as redundant (that’s one of the two possible solutions you list).

Hi Paulo. I must confess this is the first time I hear about coinduction so I’ll need to read up on it a bit.

How did you test the clauses you discuss above? Did you use the coinduction library in Swi-Prolog?

It may just be a crazy idea. If so, I will blame it on the virus that is running around! :stuck_out_tongue:

I re-encoded the problems as follows:

:- set_logtalk_flag(hook, clause_activation).

:- object(example).

	:- public(m/3).

	m(_A,B,C) :- m(_D,B,C).
	m(_A,B,C) :- m(_D,C,B).

:- end_object.
:- set_logtalk_flag(hook, clause_activation).

:- object(kin_a).

	:- public([
		m/2, m/3
	]).

	m(P1,X1,Y1) :- m(_Q1,X1,Y1), m(fm,P1).
	m(P2,X2,Y2) :- m(_Q2,Y2,X2), m(fm,P2).
	m(child,X3,Y3) :- m(father,Y3,X3).
	m(child,X4,Y4) :- m(mother,Y4,X4).
	m(parent,X5,Y5) :- m(father,X5,Y5).
	m(parent,X6,Y6) :- m(mother,X6,Y6).

:- end_object.

The clause_activation hook object (that expands the examples) is defined as:

:- object(clause_activation,
	implements(expanding)).

	:- private(counter_/1).
	:- dynamic(counter_/1).

	term_expansion(
		(:- object(Atom)),
		[(:- object(Compound)), (:- public([counter/1, head/2]))]
	) :-
		Compound =.. [Atom, _],
		retractall(counter_(_)),
		assertz(counter_(1)).

	term_expansion(
		(:- public(Predicates)),
		[	(:- public(Predicates)),
			(:- coinductive(Predicates)),
		 	(:- discontiguous(Predicates)),
		 	(:- discontiguous(head/2))
		]
	).

	term_expansion(
		(:- end_object),
		[Active, counter(N), (:- end_object)]
	) :-
		Active = (
			active(Clause) :-
				parameter(1, Clauses),
				list::memberchk(Clause, Clauses)
		),
		retract(counter_(M)),
		N is M - 1.

	term_expansion(
		(Head :- Body),
		[	(Head :- active(N), Body),
			head(N, Head)
		]
	) :-
		retract(counter_(N)),
		M is N + 1,
		assertz(counter_(M)).

	term_expansion(Fact, (Fact :- active(N))) :-
		Fact \= (:- _),
		retract(counter_(N)),
		M is N + 1,
		assertz(counter_(M)).

:- end_object.

I also rewrote the algorithm that you implemented as:

:- object(program_reduction).

	:- public(report/1).

	report(Name) :-
		Example =.. [Name, _],
		Example::counter(N),
		integer::sequence(1, N, Indexes),
		program_reduction(N, Name, Indexes, [], Redundant),
		write('Redundant clauses: '), write(Redundant), nl.

	program_reduction(0, _, _, Redundant, Redundant) :-
		!.
	program_reduction(N, Name, Indexes, Redundant0, Redundant) :-
		list::selectchk(N, Indexes, Rest),
		subsumed(Name, N, Rest),
		M is N - 1,
		program_reduction(M, Name, Rest, [N| Redundant0], Redundant).
	program_reduction(N, Name, Indexes, Redundant0, Redundant) :-
		M is N - 1,
		program_reduction(M, Name, Indexes, Redundant0, Redundant).

	subsumed(Name, Index, Indexes) :-
		Example =.. [Name, Indexes],
		Example::head(Index, Head),
		Example::Head.

:- end_object.

I use the following loader.lgt file to load the code:

:- initialization((
	set_logtalk_flag(report, warnings),
	set_logtalk_flag(optimize, on),
	logtalk_load([clause_activation, program_reduction])
)).

Running the code:

$ swilgt
?- {loader}.

?- {example, kin_a}.

?- program_reduction::report(example).
Redundant clauses: [2]

?- program_reduction::report(kin_a).
Redundant clauses: [3,6,7]

Note that the code uses no tabling, no grounding of clauses, no dynamic predicates, no asserts, no retracts. It does use some dynamic binding when sending messages to the example that is being processed. The deactivation of clauses to check if they are subsumed does use a linear scan on the list of clause indexes, which, with a large number of clauses, may be a performance concern. Coinduction support is provided by Logtalk itself.

Anyway, this solution may not work for all your examples. But it may still provide you with some ideas (based on term-expansion) on how to avoid using the dynamic database.

Have fun. I had :slightly_smiling_face:

P.S. The code is fully portable and will run as-is with several backend Prolog systems.

1 Like

Hi Paulo,

Thank you, that’s very nice of you. I’ll have to get my head around coinduction before I can understand how it works, but why do you say your code uses no asserts or retracts? If I understand the Example::counter(N) syntax correctly, the code initialises a counter for the clauses of an example via a term_expansion/2 hook that uses retract/1 and assertz/1, no? Apologies if that sounds dumb; I stay away from term_expansion/2 because I find it can make code harder to debug later (I always use the text-based debugger).

The cost of the linear scan could be a problem but my (complete) implementation is already the worst bottleneck in my entire project (the implementation fits into a larger system) so anything that improves even slightly upon it would be a good thing.

Anyway I’ll see what I can learn from your code. Thank you for taking the time to write it! I’m happy that it entertained you :slight_smile:


The hook object indeed uses its internal database for term-expanding the example files. But the results of the expansion, which are what’s actually used by the program reduction code, are static. The Example::counter(N) message necessarily uses dynamic binding, as Example is only known at runtime. But the counter/1 predicate itself is static.

It’s possible to do the expansion without using any dynamic predicates but it requires the example predicates and the generated head/2 predicate to be declared discontiguous. The counter/1 predicate can be defined to simply call the Logtalk reflection API at runtime to find the number of clauses of the example.

That linear scan occurs on the list::selectchk(N, Indexes, Rest) goal in the program_reduction object (Indexes is a list of integers). I would estimate that its inherent performance cost should be considerably smaller than all the dynamic database handling and clause grounding that you’re doing in your code.

One limitation in my code is that all example predicates must be defined. From skimming over your examples, that’s not always the case. Still, if coinduction proves to be a viable solution for a subset of the problems, it may still be worthwhile.

You’re most welcome. Let me know if you have further questions about the code.

1 Like

Separate from Paulo’s kind response, I just wanted to clarify: is SLG resolution expected to change the results of call/n when dynamic predicates are involved? The way I understand it, this should be prevented by the use of as dynamic or as incremental (or a combination of both). Is the behaviour I see the expected behaviour?

I’m a little too busy to look at your code. There are surely many ways to mess up tabling results when using dynamic predicates. Declaring dynamic predicates as incremental is not intended for tabled predicates that modify the dynamic predicates they depend on during their own computation. It is intended for an “external world”:

  • You have some state for the dynamic predicates
  • You compute some tables (solve tabled goals) that depend on these predicates
  • After completion, you change the dynamic predicates
  • Now you want to use the result of the tabled computation again. The as incremental ensures
    the initial tabled computation builds an IDG (Incremental dependency graph) that links
    dynamic predicates to affected tables and tables to the tables they affect. The assert/retract
    invalidates all affected tables and asking for these tabled results will re-evaluate invalidated tables.
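The four steps above, in directive form (a generic sketch, not the code from this thread):

```prolog
:- dynamic edge/2 as incremental.
:- table path/2 as incremental.

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% 1. State:      ?- assertz(edge(a, b)).
% 2. Tables:     ?- path(a, X).           % fills the tables
% 3. Change:     ?- assertz(edge(b, c)).  % invalidates affected tables
% 4. Re-use:     ?- path(a, X).           % re-evaluates what was invalidated
```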

Tabling requires nice declarative code. To be a little more precise, the evaluation of a tabled goal must run without its set of answers being affected by cuts, exceptions, assert/retract (or more general changes to the environment that affect the set of answers).

Hope this helps (might be a bit cryptic …)


[Too many edits on this, sorry]

Thanks, Jan. You may not have had time to look at my code but your explanation helped me make some progress.

I think my original code fell under “modifying dependent dynamic predicates during computation”. My implementation first asserts a program, then retracts each of its clauses in turn, then re-asserts their body goals, then finally re-asserts the entire clause (but not always).

I was tabling the predicates of the entire program (i.e. the functor and arity of each literal in each clause of the program) before all this asserting and retracting which I’m pretty sure violates all assumptions in “tabling of impure predicates” (not to mention all assumptions of common decency in logic programming and beyond).

All I really want to do is ensure that, when I assert the body goals of a clause and then call its head, evaluation does not enter an infinite recursion. For that, I only need to table the predicates of body goals after asserting them. Sorry if that doesn’t make sense without looking at my code! It seems to have solved many of my problems anyway.
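What this might look like in code, as a hypothetical sketch rather than the actual implementation: in SWI-Prolog a directive is an ordinary goal, so table/1 can be called at runtime after the body goals have been asserted (whether a particular dynamic predicate can be tabled this way, or needs to be declared incremental first, is something to check against the tabling documentation):

```prolog
% Walk a clause body and table the predicate of each body literal.
table_body_goals((G, Gs)) :- !,
    table_body_goals(G),
    table_body_goals(Gs).
table_body_goals(G) :-
    functor(G, F, A),
    table(F/A).

% Between redundancy tests, abolish_all_tables/0 can reset all tables
% so that answers from one test do not leak into the next.
```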

My code is still dirty as all hell though and so keeps raising some errors and going infinite (I’m missing something clearly, but it’s common when I use the dynamic database that I have no idea what my program is really doing, which is why I always try to avoid it). Anyway, I think I can take it from here.

Thanks to Jan and Paulo for their valuable input.

1 Like