Debugging constraints

Another long-winded post, sorry about that.

From another thread:

I don’t want to misrepresent Peter, but I think what he’s talking about is that due to applied constraints, unification can fail in a predicate totally unrelated to the code that applied the constraint. If you consider silent failure a problem, then this seems much worse. And if you’re depending on errors to bail you out, that’s probably not going to help much either.

There are reasons why Alain Colmerauer introduced the idea of constraints way back with Prolog-II (freeze), and why Markus Triska advocates that CLP(FD) ought to be used for all finite domain arithmetic, namely that doing so can restore some of the logical properties to Prolog programs. But beyond that, using constraints on hard combinatorial problems, e.g., the Travelling Salesman class of problems, can yield huge gains in performance, factors of 100 or more on even relatively small problems. (I can provide details if anybody’s interested.) The main idea here is that instead of using a generate-and-test strategy, generating a candidate and then checking whether it’s the best solution so far, you apply constraints before generating, which triggers early failure on those possibilities that can’t be better than solutions already found, i.e., early pruning of the search space.
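As a toy illustration of constrain-before-generate (nothing like the TSP-class problems mentioned above; the predicate names are made up, using library(clpfd)):

:- use_module(library(clpfd)).

% Generate and test: enumerate all pairs, then discard those that fail the test.
gen_and_test(X, Y) :-
    between(1, 100, X),
    between(1, 100, Y),
    X + Y =:= 12,
    X * Y =:= 35.

% Constrain, then generate: the constraints prune candidates before label/1
% enumerates anything.
constrain_then_gen(X, Y) :-
    [X, Y] ins 1..100,
    X + Y #= 12,
    X * Y #= 35,
    label([X, Y]).

Both yield X=5,Y=7 and X=7,Y=5, but the first visits up to 10,000 candidate pairs while the second lets propagation cut the space before any enumeration happens. The real wins come when the “test” is an optimality check on a combinatorial search, as described above.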

Given that you see some value in constraints based on the brief discussion above, there is another issue that can arise in using them. Depending on the domain of the constraint system being used, the whole process of constraint propagation can be quite opaque. It can be triggered at any time by a unification or the addition of a new constraint, and it can take considerable time until the constraint network arrives at a fixed point. In some cases it can look like non-termination, even when it isn’t.

I’ve taken a bit of trouble in clpBNR to try to address debugging issues. For example, you can set a watchpoint on a clpBNR constrained variable which will optionally trace or enter the debugger whenever the domain of the variable narrows. But I’m interested in any ideas people might have that would make debugging with constraints easier. (Ideas from people who have experience, good and bad, with using constraints are especially welcome.)
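Roughly like this (a sketch only; see the User Guide for the exact watchpoint interface - I’m assuming a watch/2 with an action such as log or trace):

?- X::real(0, 10), watch(X, log), {X =< 5}, {X >= 2}.
% each constraint that narrows X's domain emits a log entry,
% or drops into the debugger if the trace action is used instead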

Excellent questions and I have no simple answers. There is currently no “compare intervals” in clpBNR. Some issues:

  1. How would I compare two intervals? If they are disjoint, then one is clearly less than or greater than the other. However if they overlap there are subsets of values from each of the two intervals that satisfy all three comparison ops.
  2. Is <, for example, a test (resulting in no narrowing) or a constraint, which can result in narrowing? Currently it’s a constraint; it will fail if the interval domains narrow to the empty set. (See the small example after this list.)
  3. As constraints, < and > are unsound over reals since valid values can fall through the cracks between floating point values (bounds).
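For example (a clpBNR query from memory, so the exact residual output may differ slightly), < as a constraint actively narrows both domains rather than merely testing them:

?- [X, Y]::integer(0, 10), {X < Y}.
X::integer(0, 9),
Y::integer(1, 10).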

I can’t recall whether Eclipse deals with this, and if so how, but so far I’ve avoided the question because I haven’t yet seen a need. There is a possible requirement for a max/min constraint (sorting intervals for a resource scheduling algorithm), but I haven’t tackled it yet.

Just to put my comment in context … the problems I ran into were using CLP(FD) for type inferencing of Python code, using a variant of the sample code for attributed variables. The proof-of-concept worked beautifully, but I ran into problems when extending it to real-world code, and eventually gave up on that approach.

There were 2 principal problems:

  1. When the finite-domain constraints resulted in an inconsistency, the only result was “fail”, with no indication of where the inconsistency was (there are a combinatorially enormous number of ways that the constraints might fail). It was very difficult to determine whether the failure was due to a bug in my code or a type error in the code that was being analyzed. We also tried a SAT solver but gave up on that when it resulted in millions of binary constraints.

  2. It’s not easy to extend finite domains to a lattice with an “error” term that contains information about the cause of the error and that allows the computation to continue. (Treating the types as part of a lattice with bottom (⊥) didn’t work, and I couldn’t find any algebras that would do what I wanted.)

In the end, I changed to an approach that computed successive approximations of types until they reached a fixed point; this doesn’t need constraints but does depend on predicate success/failure in a few places – but ~90% of the code is deterministic and the remainder could probably be replaced by somewhat more verbose if-then-else.

Sorry to not be clear - this is type-inferencing of Python programs, using an approach often called “gradual typing”. It lacks a good theoretical basis (there is probably no sound typing theory for Python, for example), but has practical uses. By “successive approximations”, I mean something like this: start with knowing nothing about a variable (i.e., it has type Any) and then create a union of possibilities (e.g., one pass might infer that str is a possible type; a second pass might add List[str], so the resulting type is Union[str,List[str]], using mypy-style type annotations).
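Something like this toy sketch (purely illustrative; type_join/3 is a hypothetical helper, not the actual code from that project):

% type_join(+Current, +Observed, -New): widen the current approximation with a
% newly observed possibility. 'any' means nothing is known yet; union(Types)
% collects the possibilities seen so far.
type_join(any, T, T) :- !.
type_join(T, T, T) :- !.
type_join(union(Ts), T, union(Ts)) :- memberchk(T, Ts), !.
type_join(union(Ts), T, union([T|Ts])) :- !.
type_join(T0, T, union([T, T0])).

% ?- type_join(any, str, T1), type_join(T1, list(str), T2).
% T1 = str, T2 = union([list(str), str]).

Each pass over the code applies joins like these; when a pass adds nothing new, the fixed point has been reached.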

Is Any a finite domain, or is it open ended? Maybe it’s just my twisted perspective, but I think of constraints as based on narrowing of domains, i.e., they work by crossing out possibilities. Your Unions aren’t really constraints; the type could still be any number of other possible types, no? That really doesn’t “constrain” a possible solution. It’s more like you’re using attributes to store meta-information about the variable.

Put another way, that’s a perfectly good use of attributed variables but does it have anything to do with constraints?

I don’t think I see anything there that covers set comparison; lots of other set operations, but not compare. Did I miss something?

Any is theoretically infinite (e.g., List[List[List[...]]] is a possible type); but even if it were finite, it would be difficult to enumerate.

That’s exactly how I originally thought about computing types; but in the end, I was unable to get this to work because of the difficulty in handling error situations (that is, where there appeared to be an inconsistency).

Exactly – which is why in the end I didn’t use constraints to compute the unions, but instead did successive refinements by adding possibilities rather than using constraints to subtract possibilities.

(This work was done about 8 years ago, and I’m a bit fuzzy on the details now.)

You asked the original question, but now I’ve lost sight of what it was.

Anyway, compare/3 in Eclipse is a syntactic comparison:

bounded reals: in ascending order (if bounds overlap, the order is by increasing lower bounds, then increasing upper bounds; if both bounds are the same, the two terms are considered equal).

Since the SWIP comparison doesn’t consider attributes AFAICT, clpBNR intervals will be compared and sorted as any other variable. As a library I can’t control this. Users are free to write their own syntactic compare routine and use it with predsort.
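For instance, something along these lines (a sketch, assuming clpBNR’s range/2 or any equivalent accessor to fetch the current bounds; the ordering follows the ECLiPSe rule quoted above):

% Order intervals by increasing lower bound, then increasing upper bound.
interval_order(Order, X, Y) :-
    range(X, [XL, XH]),
    range(Y, [YL, YH]),
    compare(Order, XL-XH, YL-YH).

% ?- X::real(0,2), Y::real(1,3), predsort(interval_order, [Y,X], Sorted).

Note that predsort drops “equal” elements, so intervals with identical bounds would be merged.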

That sounds reasonable; constraints weren’t a fit. So I’m not sure what conclusions, if any, I can draw from this.

So not true. =<, =:=, >= are fundamental constraints in any arithmetic system, and clpBNR is no different in that respect. But, in general, they are constraints, not tests.

The question then is: “Is there a ‘compare’ predicate and what does it do?” Eclipse has a general compare between any terms, but the interval comparison on bounded reals is syntactic.

I’m not exactly sure what zcompare does. It looks to me like a predicate implementing something like:

zcompare(<,X,Y) :- X #< Y.
zcompare(=,X,Y) :- X #= Y.
zcompare(>,X,Y) :- X #> Y.

If so, clpBNR could do something similar with the caveat that < and > are unsound operations on non-finite domains. If X and Y were constrained to be integers, this caveat would not apply.

In any case, we’re getting off-topic here. Why are constraint based programs difficult to debug?

I disagree. Consider two real intervals (non-finite domain) X and Y. For X to be strictly less than Y, the upper bound of X must be less than the lower bound of Y. Given the finite number representation in a computer, there are an infinite number of real values between any two distinct floating point values. That means there are an infinite number of real values between the upper bound of X and the lower bound of Y that satisfy the constraint X<Y but can never be included in answers. That means “unsound” to me, but you may have another term.

I actually haven’t seen any CLP(R) systems that admit to this. It may not be a practical issue by and large, but it’s a very important theoretical issue IMHO.

Anyway, back to topic.

I’m not about to download Eclipse to demonstrate what I think should be an obvious point. But here’s a clpBNR example, first with =< which is sound and then with <:

?- {X=<1},Z=9007199254740996r9007199254740997,compare(Op,Z,1),format("Z ~w 1\n",[Op]),X=Z.
Z < 1
X = Z, Z = 9007199254740996r9007199254740997,
Op =  (<).

?- {X<1}, Z=9007199254740996r9007199254740997, compare(Op,Z,1), format("Z ~w 1\n",[Op]), X=Z.
Z < 1
false.

So X is a constrained variable to be less than 1, and Z is a numeric value less than 1 as indicated by the result of compare. Yet unifying X and Z fails.

In clpBNR, < sets the upper bound of X to the largest floating point value lower than 1; not sure how else to interpret this. Z is between this value and 1, so it falls through the crack.

No, nothing nefarious going on: Z=9007199254740996r9007199254740997 (= means unification as you expect), so a precise numeric value less than 1 and greater than nexttoward(1,0). (And there are an infinite number of them in the domain of real numbers.)

That also means that Z cannot be precisely represented by a floating point value.

I don’t need to, and didn’t; nothing was fuzzed.

Don’t over-complicate this, it’s not complicated at all. Given:

?- {X<1}.
X::real(-1.0Inf, 0.9999999999999999).

Note the upper bound. It’s a floating point number as close to 1 as you can get given 64-bit floats. But reals are continuous, which means there are real values between this upper bound value and 1. (That rational value in the example is one of them.) And this means there are real values that satisfy the constraint of being less than 1 but are not included in X. Is there an upper bound value that would result in < being sound? The answer is no; that would require an infinite amount of memory since reals are continuous.

Therefore I conclude that the constraint {X<1} is not mathematically equivalent to “X is the set of all real values less than 1”. That’s what I mean by (mathematically) unsound. Note that =< is sound:

?- {X=<1}.
X::real(-1.0Inf, 1).

X is the set of all real values less than or equal to 1.

And < (and > and =\=) is sound over integers:

?- X::integer(-inf,inf),{X<1}.
X::integer(-1.0Inf, 0).

X is the set of all integer values less than 1.

I didn’t intend that this be a clpBNR discussion. Here’s the repo: https://github.com/ridgeworks/clpBNR. There’s a 100 page User Guide (largely tutorial) and clpBNR is available as a library through the SWI-Prolog packs page. It’s also written entirely in Prolog for anyone who wants to understand the details, or make their own version.

Why would I do that? Look, you asked for an example to prove the unsoundness of <. Converting a precise value (that rational) to a fuzzy interval with floating point bounds totally misses the point. That interval may contain the precise value but it isn’t the precise value.

Yes. In finite domains, this is often done by labeling. In non-finite domains it’s often done by a bifurcating search; solve in clpBNR and perhaps others. This effectively adds constraints until something fails in which case the variable’s domain can be reduced (narrowed). This is because constraint propagation is usually insufficient for reasons I’m not about to go into here.
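A minimal library(clpfd) illustration of the finite domain case (propagation alone leaves the domains wide; label/1 does the actual search); solve/1 plays the analogous bifurcating role for clpBNR intervals:

?- use_module(library(clpfd)).

?- X in 1..10, Y in 1..10, X + Y #= 12, X * Y #= 35.
% propagation narrows the domains but picks no values

?- X in 1..10, Y in 1..10, X + Y #= 12, X * Y #= 35, label([X, Y]).
X = 5, Y = 7 ;
X = 7, Y = 5.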

When I reflect on this, it’s not clear that constraints were not a fit … but only for certain kinds of type inferencing; they could be a good fit for inferring types of parameters from how they’re used, for example. My mistake was in focusing on only one algorithm for type inferencing; I should have combined multiple algorithms (and also some compiler techniques, such as SSA transformation).

However, debugging the constraint solver would still have been daunting because of the large number of possible types (even if finite) and the large number of generated constraints; also, attr_unify_hook/2 is tricky (and SWI-Prolog’s might not even be sufficient, given that ECLiPSe does it a bit differently); and I still don’t have a clear idea as to how “error” should be represented as part of a finite-domain constraint. (Maybe Bill Older has some ideas? …)

I won’t pretend to understand the problem you were trying to address, let alone whether constraints are a help or not. But here’s my perhaps very narrow perspective on constraints:

  • Constraints defer unification of a variable until such time as the conditions are right, whatever that may be.
  • A constraint system defines a “domain of discourse” which specifies the possible value or set of values that the variable can be unified with.
  • Constraints are incrementally applied, i.e., there can be more than one on a variable and they can involve more than one variable, e.g., dif.
  • Prior to final unification, the variable’s domain can narrow via the application of new constraints. If the domain narrows to the empty set, that’s failure. If the set narrows to 1 value, that normally triggers unification. The key word here is narrows, in the same way that unification narrows the domain of a logic variable (see the small example after this list). (The early CLP folks - Jaffar and co. - suggested that Prolog was just an instance of the general CLP language category.)
  • Failure, resulting in backtracking, removes constraints. Failure is good because it forces you down alternative paths with different constraints, resulting in smaller domains containing the “answer”, whatever that may be.
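The small example promised above, using library(clpfd) domains (what I’d expect from SWI’s clpfd):

?- X in 1..10, X #> 3, X #< 6.
X in 4..5.

?- X in 1..10, X #> 3, X #< 6, X #\= 4.
X = 5.

The first query shows narrowing by adding constraints; in the second, the domain narrows to a single value and the variable is simply unified with it.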

To my way of thinking, the attr_unify_hook/2 is just the application of another constraint, one executed when the constrained variable is unified with something else. The complicated but unusual case in my experience is unifying two constrained variables, which requires some kind of merging of the variables’ domains and their constraints. The simpler, more common case is unification with some other term. This is usually fairly straightforward: just check if the term is consistent with the current domain and constraints (via propagation); if so, the unification succeeds, if not, it fails.
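For reference, here’s the shape of that logic in a minimal attributed-variable module, along the lines of the “domain” example in the SWI-Prolog manual (a sketch, not clpBNR’s actual code):

:- module(domain, [domain/2]).
:- use_module(library(ordsets)).

% domain(?X, +Values): constrain X to be one of Values. Attaching the domain to
% a fresh variable and then unifying lets attr_unify_hook/2 below handle the
% cases where X is already constrained or already bound.
domain(X, Values) :-
    list_to_ord_set(Values, Dom),
    put_attr(Fresh, domain, Dom),
    X = Fresh.

% Called after an attributed variable has been unified with Other.
attr_unify_hook(Dom, Other) :-
    (   get_attr(Other, domain, OtherDom)
    ->  % two constrained variables: merge by intersecting their domains
        ord_intersection(Dom, OtherDom, NewDom),
        NewDom \== [],                          % empty intersection: fail
        (   NewDom = [Value]
        ->  Other = Value                       % singleton: triggers unification
        ;   put_attr(Other, domain, NewDom)
        )
    ;   var(Other)
    ->  put_attr(Other, domain, Dom)
    ;   % unification with an ordinary term: just check consistency
        ord_memberchk(Other, Dom)
    ).

For example, domain(X, [a,b,c]), domain(X, [c,d]) intersects the domains down to [c] and binds X = c; a subsequent X = a would just fail.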

If I recall, there’s been some (heated?) debate about attr_unify_hook semantics, e.g., is it called pre or post unification, but I personally haven’t had any problems with the current SWIP definition. For me, it was just a matter of understanding what it did.

So to use constraints you need a domain of discourse and a set of constraint definitions that can be used to narrow that domain. From 10,000 feet through a fog, it looks to me that a given variable should only have one type at a time, i.e., the type is either unknown or it has a single value. (Or maybe not?) In the larger program context there may be alternative branches where it may have different types, but these solutions need to be collected in a findall or some such. And some alternatives may fail to infer the type, so it’s left as unknown. The question remains: what does “error” mean in this context?

Here’s a link to an abstract of one of Bill’s papers all about CLP(intervals) and lattice theory, … If you think it’s at all relevant (I have no clue), I can pursue it with Bill if you wish.
https://link.springer.com/article/10.1023%2FA%3A1009701608825

Only if a fuzzy operation is applied, e.g., sqrt; otherwise bounds that start precise will stay precise. And once bounds are fuzzed, they can’t be defuzzed other than by a narrowing operation.

The simple answer is apply the constraint X=<4 and see if that’s inconsistent with the rest of the “rules” of construction. I just don’t understand the point of the question.
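In clpBNR terms, something like this (toy domains; residual output from memory):

?- X::integer(1, 10), {X >= 2}, {X =< 4}.
X::integer(2, 4).

?- X::integer(1, 10), {X >= 7}, {X =< 4}.
false.

The second query fails because the added constraint is inconsistent with what’s already there; that failure is the “test”.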

What can I say but “rubbish”. All comparisons are supported. At no time did I say it didn’t have them; I said 3 of them were unsound over reals and went to extreme lengths to explain why, and even then it seems to have escaped you. And “defuzzification” has no meaning to me, so I have no idea what special support for it would even mean.

I think this particular discussion has reached a non-useful conclusion - it’s starting to feel like trolling to me, so I probably won’t respond further.

“Narrowing” happens when you add constraints - the number of possible values decreases (narrows).