Bidirectional predicates in the stdlib?

Hey, just a quick thought: I was wondering if there are still any ongoing efforts to make predicates in the standard library more bidirectional wherever possible?

For example, list_to_set/2 with the query list_to_set(X, [1,2,3]). Don’t we know what would need to be true of X to make this relation hold? IMO it would be cool if X backtracked through answers like

[1,1,2,3] ;
[1,1,2,2,3] ;
[1,1,2,2,3,3] ;
[1,1,1,2,3] ;
...
  • Also, one quick sidebar on list_to_set/2: I’m seeing that list_to_set([1,1,2,2,3,3], X). produces only one answer, X = [1, 2, 3], making list_to_set([1,1,2,2,3,3], [3,2,1]). false.
    – Sets are order-agnostic, aren’t they? I understand we can run a result through permutation/2, but wouldn’t it be more correct for list_to_set/2 to produce those orderings on its own, for the sake of logical/mathematical precision?
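For what it’s worth, the requested mode can be emulated today with generate-and-test (a sketch only; set_expansion/2 and member_of/2 are made-up names, not library predicates):

```prolog
:- use_module(library(lists)).      % list_to_set/2, member/2

% set_expansion(?List, +Set): enumerate lists whose de-duplication
% (keeping first occurrences) is exactly Set, fairly, by length.
set_expansion(List, Set) :-
    length(List, _),                % backtrack over lengths 0, 1, 2, ...
    maplist(member_of(Set), List),  % each element is drawn from Set
    list_to_set(List, Set).         % check the de-duplication matches

member_of(Set, X) :-
    member(X, Set).
```

?- set_expansion(X, [1,2,3]). then yields X = [1,2,3] first, followed by length-4 expansions such as [1,1,2,3], and so on. It never terminates, which also illustrates how quickly this search space grows.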

Another example is between(1,X,3). This currently throws an “Arguments are not sufficiently instantiated” error, but we know what X would need to be here, no? I would expect it to backtrack through 3 ; 4 ; 5 ; 6 ; ...
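This particular mode is actually easy to emulate, since SWI-Prolog’s between/3 already accepts the atom inf as an upper bound. A hedged sketch (upper_bound/3 is a hypothetical name, not a library predicate):

```prolog
% upper_bound(+Low, ?High, +X): like between(Low, High, X), but when
% High is unbound, enumerate High = X, X+1, X+2, ... on backtracking.
upper_bound(Low, High, X) :-
    var(High), integer(Low), integer(X), !,
    Low =< X,
    between(X, inf, High).     % SWI's between/3 accepts inf here
upper_bound(Low, High, X) :-
    between(Low, High, X).     % otherwise, behave as usual
```

?- upper_bound(1, X, 3). then backtracks through X = 3 ; X = 4 ; X = 5 ; ... as the question suggests.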

No specific use case for these at this time but it’s one of those things where you don’t need it until you do. Is there a reason why we don’t do this? Any technical limitations? Thanks.

In a nutshell - combinatorial explosion.

There could be limited use-cases of course, where the number of combinations can be kept under control by additional information which is known or can be inferred.

Or the computation could be delayed, using wait/2 or freeze/2. But it will be a bit slower for the common case, where the predicate’s argument is instantiated. (I used this technique to simplify the code in library(protobufs)).
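As a concrete sketch of that delayed-computation technique (lazy_list_to_set/2 is a made-up name; freeze/2 is SWI-Prolog’s coroutining primitive):

```prolog
% lazy_list_to_set(?List, ?Set): delay the list_to_set/2 call until
% List is bound to a non-variable. Caveat: freeze/2 only waits on the
% outermost variable, so a partial list such as [1|Tail] already
% wakes the goal.
lazy_list_to_set(List, Set) :-
    freeze(List, list_to_set(List, Set)).
```

When List arrives already instantiated, the frozen goal fires immediately, so the cost in the common case is only the extra wrapper call and the var test inside freeze/2 — small, but not zero, which is the slowdown mentioned above.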


But in principle all predicates could check if the arguments are instantiated enough to do the intended computation, and if they are not, they could post a constraint for the predicate to be solved later, when more information is available. That way we wouldn’t have predicates that fail because arguments are not instantiated enough (they would be just constraints) and still preserve the speed in the happy path. The transformation of the predicate could also just be a compiler pass. What am I missing?
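For illustration, that transformation could look like this for succ/2, using when/2 to post the constraint (succ_c/2 is a hypothetical name):

```prolog
% succ_c(?X, ?Y): like succ/2, but instead of throwing an
% instantiation error when both arguments are unbound, post a
% constraint that runs succ/2 once either argument is bound.
succ_c(X, Y) :-
    when((nonvar(X) ; nonvar(Y)), succ(X, Y)).
```

?- succ_c(X, Y), Y = 5. then succeeds with X = 4 instead of throwing, because the goal is simply delayed until enough is known.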


If it takes a billion years to cycle through the possibilities, or quickly runs out of RAM, then what is the point?

Wait, but I’m not suggesting to cycle through the possibilities. I’m suggesting you could freeze the computations that give rise to combinatorial explosions, and later you can take another look at them to see if something changed (you could still keep them frozen if not enough work has been done). But maybe you have an example in mind that would show the RAM consumption of this approach?
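Concretely, in SWI-Prolog a frozen goal just sits in the answer as a residual constraint and wakes once the variable is bound (a toplevel sketch):

```prolog
?- freeze(High, between(1, High, X)), X = 3.
% between/3 stays frozen; the answer carries a residual goal
% along the lines of: freeze(High, between(1, High, 3)).

?- freeze(High, between(1, High, X)), X = 3, High = 5.
% Binding High wakes the goal, which now just checks 1 =< 3 =< 5,
% succeeding with High = 5, X = 3.
```

Memory-wise, each suspended goal is a heap-allocated term attached to a variable, so the cost is proportional to the number of delayed goals, not to the size of the search space they would otherwise explore.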