Understanding computational complexity of CHR head selection

I’m a bit confused about term indexing in CHR constraints.

The question I have stems from a sentence in the famous CHR tutorial slides in which it is asserted (slide 151) that

foo(a, Y), foo(a, Z) ==> more stuff..

means:

foo(X,Y), foo(W,Z) ==> X==W | more stuff...

I’d like to know whether that equivalence holds only in a semantic sense, or also operationally. In the first case, calling foo(a,X), foo(a,Y) would cost on the order of the square of the number of constraints whose first argument is a; in the second case, I would have to pay a price proportional to the square of the total number of foo/2 constraints.

In other words, can we expect the usual good indexing behavior that we have in Prolog also in CHR rules?

Here is an experiment to test a couple of possibilities:

:- use_module(library(chr)).
:- use_module(library(clpfd)).  % for #=/2, used in the bar/1 guard below

:- chr_constraint foo/1, twice/1.
foo(X), foo(X) ==> twice(X).

prepare_foo :-
    numlist(0,10,Xs),
    maplist(foo, Xs).

study_foo :- chr_notrace, prepare_foo, chr_trace, foo(6).

:- chr_constraint bar/1, adjacent/2.
bar(X), bar(Y) ==> X #= Y+1 | adjacent(X,Y).

prepare_bar :-
    numlist(0,10,Xs),
    maplist([X]>>(X1 is 2*X+1, bar(X1)), Xs).

study_bar :- chr_notrace, prepare_bar, chr_trace, bar(10).

It seems to me, from reading the chr_trace output, that the CHR store never tries all the constraints before finding the right rule. Does this mean that the CHR store is indexed in this case, or am I reading this wrong, @jan?

Don’t ask me :slight_smile: The CHR version in SWI-Prolog is mostly the PhD Thesis work of Tom Schrijvers. He surely wrote publications on it :slight_smile: But yes, AFAIK the store uses a hash index to find candidate constraints. Don’t ask for the details …

2 Likes

Thanks, wasn’t sure who to ping! Pinging @TomSchrijvers then! :blush:

Hi meditans,

To answer your initial question, the equivalence of the behavior mentioned in that presentation does not concern performance.

My CHR compiler uses some indexing structures, but when and how they are used is quite different from Prolog.

Is there a particular use case you are concerned about?

Best,

Tom

1 Like

Hi, thank you very much for your answer.

My question stems from the fact that I lack a clear operational model, and thus I often fear writing non-performant code.

The examples I have come from the bottom-up parsing world, in which constraints in the head are deterministically related, but not simply by variable unification. Something like:

foo(X), foo(Y) ==> X #= Y+1 | more...

or

foo(X), foo(Y), foo(Z) ==> computation(X,Y,Z) | more...

in which X determines Y and both together determine Z in computation.

In the case above, for example, would I be paying a cost proportional to the cube of the number of foo constraints?
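For concreteness, here is the kind of rewrite I wonder about: materialising the functional dependency as an extra constraint, so the partner is found by head matching on a shared variable rather than filtered by an arithmetic guard over every pair. (The names succ_foo/2 and linked/2 are just made up for this sketch, and I'm assuming the store can index shared head arguments.)

```prolog
:- use_module(library(chr)).

:- chr_constraint foo/1, succ_foo/2, linked/2.

% When foo(Y) arrives, also store its precomputed partner value,
% so the relation X = Y + 1 lives in the head, not in a guard.
foo(Y) ==> X is Y + 1, succ_foo(X, Y).

% The join on the shared X is now plain head matching, which the
% store could in principle serve from an index.
succ_foo(X, Y), foo(X) ==> linked(X, Y).
```

Would a rewrite like this actually change the complexity, or does the compiler already do something equivalent for the guarded version?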

Another example: imagine I have two types of points in the (lattice) 2D plane, a/1, and b/1, and I want to do something based on adjacency. For example, I might have:

a(X1-Y1), b(X2-Y2) ==> adjacent(X1-Y1, X2-Y2) | more...

Obviously, given an a, there are only a few b locations that need checking, but does the generated code exploit that? Or does it try every combination (which could be unacceptable for some problems)? And how do I determine which is the case?
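Here is a sketch of the workaround I have in mind, assuming 4-neighbourhood adjacency on the lattice (near/2 and touching/2 are hypothetical names I invented for this example):

```prolog
:- use_module(library(chr)).

:- chr_constraint a/1, b/1, near/2, touching/2.

% Enumerate the four lattice neighbours of each a-point up front,
% so a matching b-point is found by its position term directly.
a(X-Y) ==>
    Xl is X - 1, Xr is X + 1,
    Yd is Y - 1, Yu is Y + 1,
    near(X-Y, Xl-Y), near(X-Y, Xr-Y),
    near(X-Y, X-Yd), near(X-Y, X-Yu).

% The shared Q means this is head matching, not guard testing.
near(P, Q), b(Q) ==> touching(P, Q).
```

Is this kind of manual enumeration necessary, or would the guarded single-rule version be compiled to something comparable?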

Bonus question: is there a mechanism with which I could decide dynamically which rules are active in a CHR computation? Or start two computations with the same constraints but slightly different rules?

1 Like

@TomSchrijvers in case you missed the reply above :slightly_smiling_face: