Hello, I wanted to share a short article describing a neat technique I discovered some time ago, where explicitly encoding some invariants in your code as constraints allows you to inject control into predicates and get rich behaviour out of simple code.
While I agree with the general message, I’m a bit puzzled by the complexity of the examples you chose.
For instance, elem_at is not needed at all; nth0 just "works":
> swipl
Welcome to SWI-Prolog (threaded, 64 bits, version 9.2.7)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.
For online help and background, visit https://www.swi-prolog.org
For built-in help, use ?- help(Topic). or ?- apropos(Word).
?- use_module(library(clpfd)).
true.
?- 3 #=< I, I #=< 5, nth0(I, [a, b, c, d, e, f, g], E).
I = 3,
E = d ;
I = 4,
E = e ;
I = 5,
E = f ;
false.
?-
So, I'm all in for using constraints as much as we can, but there's no need to rewrite everything for that.
I start with the list example because it’s the simplest one to demonstrate the principle, even if it is not necessarily practical.
nth0 does "work", but the point is not correctness: if you actually trace the execution, you'll see that it always traverses the whole list. In contrast, elem_at only traverses the prefix it needs, as dictated by the constraint. The two predicates therefore have different asymptotic complexity when constraints are present, and if that prefix is much shorter than the whole list, the difference can be a considerable performance improvement. But it is admittedly a toy example.
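To make that concrete, here is a minimal sketch of an elem_at along these lines (the exact definition in the article may differ); the key point is that the index guard is posted as a constraint before the recursive call, so the recursion stops as soon as propagation rules out any further index:

:- use_module(library(clpfd)).

% elem_at(?I, ?List, ?E): E is the element of List at 0-based index I.
% The I #> 0 guard is a constraint posted before recursing, so once the
% residual domain of the index excludes all larger positions, the clause
% fails without visiting the rest of the list.
elem_at(0, [E|_], E).
elem_at(I, [_|T], E) :-
    I #> 0,
    I0 #= I - 1,
    elem_at(I0, T, E).

With the query from the transcript above, 3 #=< I, I #=< 5, elem_at(I, [a, b, c, d, e, f, g], E), you get the same three answers (d, e, f), but after index 5 the residual index domain is 0, the I #> 0 guard fails, and g is never visited, whereas nth0 keeps backtracking through the entire list.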
Similarly, the binary search tree case achieves logarithmic instead of linear complexity, at the cost of a larger constant factor.
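For the tree case, a hypothetical sketch of the idea (the tree representation and predicate in the article will differ in the details): the key comparisons are posted as clpfd constraints, so whenever the key's domain lies entirely on one side of a node, the other subtree is pruned without being visited.

:- use_module(library(clpfd)).

% bst_lookup(+Tree, ?Key, ?Value): look up Key in a binary search tree
% represented here as t(Left, K, V, Right) or the atom leaf.
% Because the comparisons are clpfd constraints, a whole subtree is
% discarded as soon as the key's domain falls on the other side of K.
bst_lookup(t(_, K, V, _), K, V).
bst_lookup(t(L, K, _, _), Key, V) :-
    Key #< K,
    bst_lookup(L, Key, V).
bst_lookup(t(_, K, _, R), Key, V) :-
    Key #> K,
    bst_lookup(R, Key, V).

The larger constant factor mentioned above comes from the propagation work done at each node along the remaining search path.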
My whole point was that using nth0 is still linear and not quadratic… hence I feel the example is not very "exciting". Still, it's always good to have articles promoting constraints.