clpBNR: B or B

I was playing around with clpBNR and leap years and found the following:

:- use_module(library(clpBNR)).

% constraints-only version
leap(Y) :-
   Y::integer(1583,_),
   [Q0,Q1,Q2]::integer(1,_),
   [R0,R1,R2]::integer(0,_),
   {   (  Y == Q0*4   + R0, R0 == 0 ) and       % multiple of four and
       ( (Y == Q1*100 + R1, R1 >  0, R1 < 100 ) % ( not multiple of 100
         or                                     %  or
         (Y == Q2*400 + R2, R2 == 0 )   )       %   multiple of 400 )
   }.

% mixing prolog with constraints
leap1(Y) :-
   Y::integer(1583,_),
   multiple_of(Y,4),                            % multiple of four and
   ( not_multiple_of(Y,100)                     % ( not multiple of 100
   ; multiple_of(Y,400)                         %  or
   ).                                           %   multiple of 400 )

% Y not multiple of X
not_multiple_of(Y,X) :-
   [Y,Q,R]::integer(0,_),
   { Y == Q*X + R, R >  0, R < X }. %  not multiple of X

% Y multiple of X
multiple_of(Y,X) :-
   [Y,Q,R]::integer(0,_),
   { Y == Q*X + R, R == 0 }. %  multiple of X

The problem is that 2100 is not a leap year, but the constraints-only version succeeds when it should fail:

23 ?- leap(2100).    % this is the wrong result
true.

24 ?- leap1(2100).  % this works fine
false.

I think it boils down to the following problem:


19 ?- { B or B }.  % should this not bind B to 1? 
B::boolean.

20 ?- { B xor B}.  % should this not fail?
B::boolean.

So I figured the constraint propagation was not strong enough, so I tried `solve(...)`, and the following gives the expected results:

21 ?- { B xor B}, solve(B).
false.

22 ?- { B or B }, solve(B).
B = 1.

The fact that {B xor B} does not fail is the most troubling case, because it succeeds with B unbound.

The problem is that using solve(...) doesn’t help in the leap year case. I am already providing a ground year. I can see the wrong result happens because the internal variables (Q1, Q2, etc.) in leap/1 are not bound, but calling solve on them makes the predicate unusable.
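
For concreteness, this is the kind of variant I mean (just a sketch: my leap/1 above plus the labelling call; the internal intervals are unbounded, so the solve/1 step is what makes it unusable):

% sketch only: leap/1 plus solve/1 over the internal variables
leap_slow(Y) :-
   Y::integer(1583,_),
   [Q0,Q1,Q2]::integer(1,_),
   [R0,R1,R2]::integer(0,_),
   {   (  Y == Q0*4   + R0, R0 == 0 ) and
       ( (Y == Q1*100 + R1, R1 >  0, R1 < 100 )
         or
         (Y == Q2*400 + R2, R2 == 0 )   )
   },
   solve([Q0,Q1,Q2,R0,R1,R2]).   % this enters a very long computation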

Any ideas?

Some observations from my experience:

  • clpBNR can accept a constraint even when there is no actual solution, so bear that in mind. You must use solve or enumerate to be sure that a solution exists.
  • clpBNR nibbles at the edges of ranges rather than efficiently excluding specific integers the way clpfd does. (clpBNR can still be faster than clpfd with integers, depending on the problem.)
  • solve/1 or enumerate/1 can produce wrong results if too few of the constrained clpBNR variables are passed to them (e.g. Booleans).

This works better:

?- B::boolean, {B+B==2}.
B = 1.

This is an example of the classic dependency problem with intervals. The interval arithmetic system is treating each B as an independent variable so no narrowing occurs when integer(0,1) or integer(0,1) is evaluated. In other words {B or B} is no different from {B or C}. The net effect is that it acts as a constraint on B but doesn’t automatically force any narrowing. xor is no different. On the other hand:

?- {B and B}.
B = 1.

?- {B and C}.
B = C, C = 1.

does result in narrowing because both operands must be 1.
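
To make the dependency point concrete, posting the constraint over two distinct variables is expected to leave exactly the same residual state as {B or B} (illustrative query):

?- {B or C}.
B::boolean,
C::boolean.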

Labelling predicates like solve and enumerate systematically apply additional constraints to force narrowing of the intervals, subject to existing constraints. Otherwise some kind of meta-level analysis is required to transform {B or B} into B = 1 and {B xor B} into failure. Such meta-analysis is done for some arithmetic operations:

?- {X+X==4}.  % transformed to {2*X==4}, i.e., no dependency issue
X = 2.

?- {X+Y==4, X==Y}.
X = Y,
Y::real(-1.0Inf, 1.0Inf).

?- {X+Y==4, X==Y}, solve(X).
X = Y,
Y:: 2.00000000... .

Perhaps I should think about extending this sort of thing to boolean operators.
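
In the meantime, a user-level workaround could recognise a couple of these patterns before posting the constraint. A hypothetical helper (not part of clpBNR), just to sketch the idea:

:- use_module(library(clpBNR)).

% simplify two dependent boolean patterns before posting to {}/1
bool_constraint(X or Y)  :- X == Y, !, {X == 1}.   % B or B  holds only if B is 1
bool_constraint(X xor Y) :- X == Y, !, fail.       % B xor B can never be true
bool_constraint(C)       :- {C}.                   % anything else: post as usual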

Not saying that this is a great situation but it seems to be a fundamental problem when doing arithmetic on sets of numbers.

It’s a little more complicated. To your points:

  • successful application of a constraint means only that you can’t disprove that a solution exists subject to any existing constraints. This is true even when solve or enumerate are used. Success implies that if a solution exists, it lies within any variable ranges remaining, not that there is necessarily a solution within those ranges. Furthermore, internal throttling mechanisms may terminate convergence before a failure can occur, a price that must be paid to avoid the appearance of non-termination. Only if all variables are unified with a number, which will never happen with real variables due to outward rounding, can a positive solution be said to exist (see the sketch after this list).

  • Correct, clpBNR domains are continuous (as required for real domains). For many integer problems, systems that support domains with “holes” may well be more efficient.

  • Subject to the first point, “wrong results” should not be generated, i.e., solutions should not exist outside the set of interval values. Insufficient narrowing may not be useful but it’s not wrong. However, I’m not discounting bugs, so I’d like to know about anything considered to be “wrong”.
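
A compact illustration of the first point, using assumed queries that restate examples from earlier in the thread:

?- {B xor B}.               % succeeds with B left as a boolean, yet no solution exists
?- {B xor B}, solve(B).     % labelling exposes the inconsistency: fails
?- X::integer(1,3), {X==2}. % X unified with the number 2: a proven solution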

Thanks for the explanation, I can see why it happens now. Essentially the interval arithmetic does not take into account which variable an interval belongs to, at least for booleans.

I think this would be very useful from a usability point of view. I once heard a very experienced software developer say that a good API is one that makes it difficult for the user to make mistakes. Another way I’ve heard it put is that there should be “no surprises”, so having consistent behavior for boolean and arithmetic operators would be great (from the user’s point of view).

gotcha(B, X) :-
    X::integer(-1, 2),
    B::boolean,

    {B==0 -> X==0},
    {B==1 -> X==1}.

This works:

?- gotcha(B, X), solve([B, X]).
B = X, X = 0 ;
B = X, X = 1.

This is a gotcha:

?- gotcha(B, X), solve(X).
X = -1,
B::boolean ;
X = 0,
B::boolean ;
X = 1,
B::boolean ;
X = 2,
B::boolean.

Not sure what you were expecting. Solving for X by itself just enumerates the possible values of X; narrowing X has no effect on B.

However narrowing B does affect X:

?- X::integer(-1,2), B::boolean, {B==0 -> X==0}, {B==1 -> X==1}, solve(B).
X = B, B = 0 ;
X = B, B = 1.

I think the idea is that a regular CLP user does not expect some constraints to be ‘ignored’ (in the user’s perception). In the user’s perception (a user who is not thinking about the fact that solve(X) does not try to narrow B), it is the same as if he had never written {B==0 -> X==0}, {B==1 -> X==1}, so the user thinks “this is a bug, the system is completely ignoring my constraints”.

Of course, looking at it from the implementation point of view, you can clearly see that solve(X) does not narrow B, so the behaviour seems fine from that perspective. But I think that when designing libraries it is better to design the user-facing API as if we were users, thinking the way the user thinks, and for a moment forget the implementation view of things.

This has nothing to do with implementation; it’s about the meaning of boolean implication (->). {B==0 -> X==0} means that a value of 0 for B implies X has value 0. Conversely, the value of X has no effect on the value of B.

?- X::integer(-1,2), B::boolean, {(B==0 -> X==0)}, {(B==1 -> X==1)}, solve(B).
X = B, B = 0 ;
X = B, B = 1.

?- X::integer(-1,2), B::boolean, {(B==0 -> X==0)}, {(B==1 -> X==1)}, solve(X).
X = -1,
B::boolean ;
X = 0,
B::boolean ;
X = 1,
B::boolean ;
X = 2,
B::boolean.

No constraints are ignored here. If that’s the user’s perception, he doesn’t understand ->.

The point is that it would be nice if clpBNR realized that X’s value depends on B’s value, so automatically calculated B when asked to calculate X - and therefore excluded the impossibilities of X being -1 or 2.

What is {B==0 -> X==0}, if not a constraint?

The constraint is also B::boolean, which implies that B can only be 0 or 1, and because we have

B::boolean,
{(B==0 -> X==0)}, {(B==1 -> X==1)}

the user can rightly perceive that X can only be 0 or 1. It is an implementation decision not to narrow B, but that perception is nonetheless valid, although not required. What do you think?

The point is that X’s value does depend on B’s value, but B’s value does not depend on X; that’s the semantics of ->. So calculating X does not affect B. X can be -1 or 2 as long as B is not 0 or 1. Nothing in CLP (or Prolog) semantics insists that “answers” must be ground.

It is a constraint but maybe we have different ideas of what a constraint is. I think it is a boolean equation that must be satisfied whenever any variables used in that equation change. The job of the constraint network is to ensure all relevant constraints can still be satisfied whenever a variable is modified (narrowed or bound to a ground value, in this case, a number). And that’s all it does.
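
As a simple illustration (an assumed query, not from the transcripts above), binding one of the variables causes the constraint to be re-checked and the other to be narrowed:

?- {B or C}, B = 0.   % the or constraint is re-evaluated; C is forced to 1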

or not yet defined, i.e., B::boolean, because it is just a constraint on the value of B.

Only given that B is 0 or 1, and not B::boolean. You’ve provided a meta-interpretation of the constraints, and the constraint system doesn’t deal in meta-interpretations. A meta-analysis might also conclude that your program fragment could be re-written as B::boolean, {X==B}, but I would claim that’s not something a user could justifiably expect the constraint system to automatically deduce.

Taking it one more step, what would you expect from an equivalent program:

[B,C]::boolean,
{(B==0 -> X==0)}, {(C==1 -> X==1)}, {B==C}

If the same, how many more levels of indirection would you expect to work? At some point this becomes a reasoning system over a system of equations rather than a constraint system. This might be a reasonable thing to do but it’s not what’s normally defined as CLP AFAIK.

My point is that for a regular user ‘not yet defined’ means:

  1. B is a variable (doesn’t have a value)
  2. B can only have the values 0 or 1
  3. any solution I get will honor the fact that B can only be 0 or 1

It seems that what you mean by ‘not yet defined’ means:

  1. B is a variable (doesn’t have a value)
  2. B can only be bound to the values 0 or 1
  3. If B is not bound (or narrowed) I don’t consider it for any computation, so I can give the user solutions that could be incompatible with 2

Is this correct?

Not quite. Even if B is not bound, it will be considered for any future computation. But “incompatible with 2” can only be determined when B is bound.

Think of a set of constraints as a deferred test on future values of the variables. That test will be executed when the variables are bound to values or when the domains of the variables change. If nothing changes you’re just left with the original variables subject to the constraints. In CLP terminology these are residual goals (SWI-Prolog -- Coroutining):

Remaining constraints … are called residual goals. They are said to flounder, because their truth is not yet decided. Declaratively, the query is only true if all residual goals are satisfiable.

If I set the clpBNR_verbose flag to true these goals (using the primitive operations) become visible:

?- X::integer(-1,2),B::boolean,{(B==0 -> X==0)}, {(B==1 -> X==1)}.
X::integer(-1, 2),
B::boolean,
(_A::boolean, {_A==(B==0), (_A->_B)}),
(_B::boolean, {_B==(X==0)}),
(_C::boolean, {_C==(B==1), (_C->_D)}),
(_D::boolean, {_D==(X==1)}).

This succeeds and will be considered for any “computation”, but so far there isn’t any; just a set of “residual goals”. Such a computation might set B to 0 or set X to -1:

?- X::integer(-1,2),B::boolean,{(B==0 -> X==0)}, {(B==1 -> X==1)}, B=0.
X = B, B = 0.

?- X::integer(-1,2),B::boolean,{(B==0 -> X==0)}, {(B==1 -> X==1)}, X= -1.
X = -1,
B::boolean.

These both succeed because there’s nothing in the individual constraints that will cause failure. But the combination will fail:

?- X::integer(-1,2),B::boolean,{(B==0 -> X==0)}, {(B==1 -> X==1)}, B=0, X= -1.
false.

This is all the constraint system is capable of doing, testing that any domain changes are “compatible” with the defined constraints. You might find it unsatisfactory because you can see that X must be in the range 0 to 1 based on B::boolean but that’s a meta-interpretation, and outside the capabilities of the constraint system.

In general, success means that if there is a solution, it lies within the set of variable domains, but not that there necessarily is a solution. No constrained variables remaining (i.e., no residual goals) is a proven solution. On the other hand, failure does mean there is no solution.

The problem with this is that sometimes the user cannot trigger the future computation because the variable is not accessible at that moment. He can’t call solve/1 on it.

Consider the initial case I posted for the leap year:

23 ?- leap(2100).    % this is the wrong result
true.

2100 is not a leap year, this succeeds because some variables have not been bound, but those variables are not accessible to the user calling leap/1. Changing the implementation of leap/1 to call solve([Q0,Q1,Q2,R0,R1,R2]) makes the system unusable as it enters into a long computation.

How could this be solved?


Then change it into leap/2, adding a list of clpBNR instances as the 2nd argument. This is of course inelegant, but would work.
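
Something along these lines, a sketch based on the original leap/1 (untested):

% expose the internal interval variables as a second argument so the
% caller can decide if and when to label them
leap(Y, Vs) :-
   Vs = [Q0,Q1,Q2,R0,R1,R2],
   Y::integer(1583,_),
   [Q0,Q1,Q2]::integer(1,_),
   [R0,R1,R2]::integer(0,_),
   {   (  Y == Q0*4   + R0, R0 == 0 ) and
       ( (Y == Q1*100 + R1, R1 >  0, R1 < 100 )
         or
         (Y == Q2*400 + R2, R2 == 0 )   )
   }.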

I can’t see any actual advantage in using clpBNR over is/2 for calculating leap years (especially since it’s missing divmod or if..then..else equivalents). I think this is the least painful way to do it with clpBNR:

leap_year(Y) :-
    Y::integer(1583,_),
    M4::integer(2,_),

    {Y==M4*4},
    solve(M4),
    (leap_year_100_(Y) -> leap_year_400_(Y) ; true).

leap_year_100_(Y) :-
    M100::integer(2,_),
    {Y==M100*100}.

leap_year_400_(Y) :-
    M400::integer(2,_),
    {Y==M400*400}.
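
For reference, the behaviour I’d expect from this (illustrative queries, not a pasted transcript):

?- leap_year(2000).   % expected: true  (divisible by 400)
?- leap_year(2100).   % expected: false (divisible by 100 but not by 400)
?- leap_year(2004).   % expected: true  (divisible by 4, not by 100)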

Is this a current bug?

% *** clpBNR v0.9.13alpha ***.
Welcome to SWI-Prolog (threaded, 64 bits, version 8.4.3)
?- {1 xor 1}.
ERROR: Stack limit (1.0Gb) exceeded

Yes, back to the original problem. Boolean “arithmetic” seems to be OK but the constraint propagation is weak on converting failures to booleans for reified constraints as your example demonstrates.

I’ll have to put this on my list for further investigation.

This example is more about testing than calculating but, for a practical solution, I agree. It’s probably better to write a simple test with a freeze if you want to make it logical:

leapF(Y) :-
	freeze(Y,(integer(Y), 
	          Y >= 1583, 
	          Y mod 4 =:= 0, 
	          (Y mod 100 =:= 0
	           -> (Y mod 400 =:= 0 ) 
	           ;  true)
	          )
	       ).

so:

?- leapF(2000).
true.

?- leapF(2001).
false.

?- leapF(2004).
true.

?- leapF(2100).
false.

Sure is, thanks.

Great! Look forward to any update.

As I anticipated, the problem is that failure in a reified constraint doesn’t get converted to a boolean value (0 or false). This turns out to be a difficult problem to fix because constraint propagation works from the bottom up (changes in any interval get propagated to other intervals in the network) and any failure causes the initiating change to fail. So a general fix, if even possible, would require rethinking the whole internal mechanics and that’s nowhere on my priority list at the moment.

A workaround is to avoid subexpressions in boolean expressions which fail. Try as I might, I couldn’t come up with a reformulation of your example that met this requirement. So I introduced a new constraint function, isint(Exp) which allows me to test for even division, which is at the root of the leap year constraint.

isint is true if Exp (successfully) evaluates to a point integer value. It is false if the point value is not an integer, or if the result is an interval which doesn’t contain an integer value. Otherwise its value remains an undetermined boolean (i.e., 0 or 1).

?- {Y==isint(X)}.
Y::boolean,
X::real(-1.0Inf, 1.0Inf).

?- {Y==isint(X)}, X=42.
Y = 1,
X = 42.

?- {Y==isint(X)}, X::real(1r4,1r2).
Y = 0,
X::real(0.25, 0.5).

?- {Y==isint(X)}, X::real(1r4,3r2).
Y::boolean,
X::real(0.25, 1.5).

leap/1 can now be implemented as:

leap2(Y) :-
   Y::integer(1583,_),
   [Q4,Q100,Q400]::real(0,_),
   { Y == Q4*4,
     Y == Q100*100,
     Y == Q400*400,
     isint(Q4),                       % Y must be multiple of 4
     (isint(Q100) -> isint(Q400))     % if Y is a multiple of 100, Y must be  a multiple of 400
   }.

Now:

?- leap2(2000).
true.

?- leap2(2001).
false.

?- leap2(2004).
true.

?- leap2(2100).
false.

While isint probably has more uses than this one, I’m not overly happy about adding new primitives piecemeal to address issues like this. However, it seems better than either the status quo or starting over. Also, as previously discussed, clpBNR probably isn’t the optimum tool for solving this problem.

Any thoughts?