Conditional doesn't backtrack for solution

I’m using: SWI-Prolog version 9.0.4

I want the code to: tell me the solutions to the following, rather than just giving false:

?- use_module(library(clpfd)).
?- include(#>(3), [K], []).
false.

I can get true by substituting 4 for K:

?- include(#>(3), [4], []).
true.

I believe it is because include/3 uses (->)/2 and (;)/2, which commit to the condition?

Can someone explain this more?

What solutions do you expect to get?

I’d expect to get the solution:

K in 3..sup

Similar to how:

?- maplist(#>(3), [K]).
K in inf..2.

You have probably already “rolled out” both the maplist and the include, at least mentally. The maplist invocation, as you show it:

?- maplist(#>(3), [K]).

is equivalent to:

?- 3 #> K.

and the include

?- include(#>(3), [K], []).

with the third argument named Result, is equivalent to:

?- (   3 #> K
   ->  Result = [K]
   ;   Result = []
   ),
   Result = [].
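
And indeed, running that unrolled goal at the toplevel gives the same failure: 3 #> K succeeds (posting a constraint on K), -> commits to the then-branch, and the final Result = [] cannot unify with [K]:

?- (3 #> K -> Result = [K] ; Result = []), Result = [].
false.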

Or did I make a mistake somewhere?


It strikes me as a CLPFD thing, since this returns an empty list:

findall(X, (X in 1..10, include(#>(3),[X],[]), label([X])), Xs).

and this does not:

findall(X, (between(1,10,X), include(#>(3),[X],[]), label([X])), Xs).
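
Unless I am miscalculating, the first should give Xs = [] (inside include, 3 #> X succeeds and narrows X to 1..2, so the third argument can never unify with []), while the second should give Xs = [3, 4, 5, 6, 7, 8, 9, 10].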

Also:

?- exclude(#>(3), [X], []).
X in inf..2.

Meanwhile, changing exclude to include results in false. It seems a clear-cut case of unexpected semantics.

All is procedurally as one would expect. include/3 and exclude/3 are not “pure” predicates and cannot be (meaningfully) combined with CLP … Modern Prolog systems, for better or for worse, should be considered a set of sub-languages, the most important being CLP, tabling and classical Prolog. You cannot freely mix these. Depending on the system, some combinations may be (partially) supported.

The best we can probably do is to add a “pure” property to predicates and warn against invalid combinations. That is quite a bit of work though, and I don’t know whether it would fully cope with this problem.

A relevant document you may want to read.
Here is the actual pack exposing the basic implementation.

edit:

After pack_install(reif), and manually cloning GitHub - mmisamore/reif_utils, I got these results:

?- tfilter(#>(3), [X], Fs).
Fs = [X],
X in inf..2,
X+1#=_A,
_A in inf..3 .

?- tpartition(#>(3), [X], Ts,Fs).
Ts = [X],
Fs = [],
X in inf..2,
X+1#=_A,
_A in inf..3 ;
Ts = [],
Fs = [X],
X in 3..sup,
X+1#=_A,
_A in 4..sup.

reif_utils is required because the comparison predicate handed to tfilter/3 and tpartition/4 must be ternary, i.e. reified with an explicit truth-value argument.
Sorry, I don’t know why pack_install(reif_utils) fails…
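
If reif_utils cannot be installed, one can also hand-write a reified version of just this comparison (a minimal sketch, assuming library(reif) is installed; gt3_t is just an illustrative name here, not part of any pack):

:- use_module(library(clpfd)).
:- use_module(library(reif)).

% gt3_t(X, T): T is true if 3 #> X holds, false if 3 #=< X holds.
gt3_t(X, true)  :- 3 #> X.
gt3_t(X, false) :- 3 #=< X.

With that, tfilter(gt3_t, [X], Fs) should give both answers, analogous to the tpartition above: Fs = [X] with X in inf..2, and on backtracking Fs = [] with X in 3..sup.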

I think not knowing where the overlaps between the different sub-systems are is a bit disenchanting aesthetically, especially if the non-overlaps are significant. It might cause one to trust abstractions only insofar as one can understand them, which would be to miss most of the benefit of abstractions, especially in high-level programming languages.

In the include/exclude example, given the output from exclude, I should think that most non-experts would expect – as I did – something other than false. That it is procedurally correct for the Prolog sub-system and procedurally correct for the clpfd sub-system, yet incorrect/unexpected when the two are used together, is, I presume, a problem no one wants to have :slight_smile: I think this is especially true given the general promotion of clpfd as a default. The guide suggests that it should be used instead of is in most cases, because it is purer since it is “relational”, and so on.

I guess this might be why some clpfd proponents are also very strong “pure” Prolog proponents, since it is only in that case that the two could be combined seamlessly, so far as I’ve understood.

This query will generate one solution, so the third argument will not unify with [].

If you want solutions:

?- include(#>(3),[K],S).
S = [K],
K in inf..2.

?- include(#>(3),[K,4,L],S).
S = [K, L],
K in inf..2,
L in inf..2.

Remember that applying a constraint will succeed (producing a solution) unless it’s provably false, e.g., 3 #> 4 will immediately fail, but 3 #> K will succeed with a constraint applied to K.
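
For example, at the toplevel:

?- 3 #> 4.
false.

?- 3 #> K.
K in inf..2.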

In the exclude case, the one solution is excluded, so there are no solutions left (third argument = []) and the query succeeds. And the constraint is still applied.

All looks like intended semantics to me.

I don’t understand this statement. If include and exclude are impure (??), they can still be combined with CLP, as shown by this example, no?

I think they’re strong pure prolog proponents because CLP restores the logical purity of (in this case) arithmetic, just as you stated; nothing more than that.

The “semantics” of include would be something like “give me all elements of the list for which the constraints cannot immediately be proven to be false”. If you accept that, fine. It is a bit fragile though, as it depends on the propagation that is performed by the constraint system, i.e., a constraint may be logically false, but can still be accepted as possibly true by the system.
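
For example, I believe

?- X #> Y, Y #> X.

succeeds with residual constraints even though the conjunction has no solution: with unbounded domains there is nothing for bounds propagation to prune, so the inconsistency goes unnoticed, and an include/3 over such a goal would keep the element.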

Pure code also allows swapping subgoals in a clause body without semantic consequences. That is not going to work here.

I agree with

I also agree this is not very satisfactory :frowning: I don’t see a decent solution though. Extending the “pure” code extends the usefulness of clp. But, in real applications there is typically also a lot of code that is hard to implement within the “pure” domain. I have used the term “logical islands” a couple of times: clp or tabling components operate in such islands, while classical (impure) Prolog code provides the “glue”. It would be nice if we could detect improper use, but this is far from trivial. XSB tabling detects cuts while filling a table (SWI-Prolog does not yet). For constraints, the rule is that no new constraints may be introduced and left unresolved between creating a choicepoint and committing to it. Possibly something based on call_residue_vars/2 could do that. It would be really hard to get acceptable performance from such an approach though.
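
For illustration, call_residue_vars/2 does report which variables picked up constraints during a goal, e.g. (if I recall the toplevel output correctly):

?- call_residue_vars(3 #> K, Vs).
Vs = [K],
K in inf..2.

Turning that into a reliable and fast check around every choicepoint is the hard part.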


It’s here that I am referring to, @Boris:

Almost all Prolog programs also reason about integers. Therefore, it is highly advisable that you make CLP(FD) constraints available in all your programs. One way to do this is to put the following directive in your <config>/init.pl initialisation file:

:- use_module(library(clpfd)).

All example programs that appear in the CLP(FD) documentation assume that you have done this.

When teaching Prolog, CLP(FD) constraints should be introduced before explaining low-level arithmetic predicates and their procedural idiosyncrasies. This is because constraints are easy to explain, understand and use due to their purely relational nature. In contrast, the modedness and directionality of low-level arithmetic primitives are impure limitations that are better deferred to more advanced lectures.

https://www.swi-prolog.org/pldoc/doc/_SWI_/library/clp/clpfd.pl

I read that as saying more or less, you should use CLPFD for all arithmetic, all the time, especially if you’re a noob :slight_smile:

The semantics of include are what they are: give me all elements of the list for which the Goal succeeds. In this case, the Goal, as defined, succeeds whenever the constraints can be successfully applied. It’s not fragile at all - in some cases, the constraint is immediately inconsistent; in others, it may not happen until later, or not at all (solution with or without residual goals). I don’t see anything mysterious going on here.

I.e., if subgoals are pure. If Goal is pure, isn’t include pure? (I’m pretty sure constraint application is pure - that’s part of the rationale.) Do you have a specific example?

Disagree, unless I’m misinterpreting this statement. And what do you mean by “resolved”? Constraints can be incrementally applied at any point - backtracking will remove them, just like any unification gets undone. Choicepoints and commits have no special semantics with regard to constraints (unless we’re using different definitions of constraint).

Would this do the “right thing” if the -> were changed to *->? Or would that break other things?

While I appreciate the sentiment, this is a bridge too far (for me). The gap between compiled functional arithmetic (with all its “idiosyncrasies”) and constraint arithmetic is just too large, not to mention that CLPFD only covers the integer domain. I do think it’s very useful to understand that there are two approaches to arithmetic and that you should be using the best tool for the job at hand.

If there ever comes a time for a next-generation logic programming language, this is one of many issues (along with any predefined predicate with argument modes, for example) that should be re-examined. (Just my opinion.)


Yes, this is what this says, just not sure about it. Noobs come in all shapes and forms.

I don’t know. Depends what “the right thing” is :smiley:

Perhaps I spoke too soon. CLPFD does a heap of load-time processing (goal expansion) which appears to significantly close the gap with standard functional arithmetic on ground expressions. Just counting inferences:

?- time((X=4,X=<5)).
% 0 inferences, 0.000 CPU in 0.000 seconds (55% CPU, 0 Lips)
X = 4.

?- time((X=4,X#=<5)).
% 1 inferences, 0.000 CPU in 0.000 seconds (65% CPU, 55556 Lips)
X = 4.

Of course the OP’s query didn’t have ground expressions, so not the issue:

?- include(<(3),[K],G).
ERROR: Arguments are not sufficiently instantiated

?- include(#<(3),[K],G).
G = [K],
K in 4..sup.

I disagree - use it when there’s a good reason - it:

  • Adds complexity (attributed variables)
  • Has a highly problem-dependent effect on performance, which may improve but will (more likely) degrade
  • Makes debugging harder

I guess you need CLP when you need to solve combinatorial problems that involve arithmetic or that can be mapped to an arithmetic problem. That is an important domain for Prolog. Still, applications typically do a lot more, for which clp is not very handy. One of the powers of Prolog is IMO combining logic and algorithmic programming. As we have seen, the boundaries between these are a bit hard to manage :frowning: