But those are separate things. Slightly changing variable names:
A prob_1(?Rotations, ?Count) is “relational” over the whole domain of rotations–count pairs. We can then call it as, e.g., parse_1(RawData, Rotations), prob_1(Rotations, Count), which uses just one of the possible invocation patterns.
The cost of that “relationality” is inefficiency on the specific problem at hand; the advantage, of course, is that prob_1 now works in all directions, so we can ask it many more kinds of questions. But I am still thinking whether we can do better than that: keeping the “relationality”, as in arguments in all directions, while still using something like “iterators”…
@brebs I’m sorry for hijacking your thread, but I can’t resist ^^
I believe this is equivalent to my foldl(rotate, Pairs, 50-0, _-Count), where Pairs = Rotations.
This can be run in all directions. And with additional labeling, you can enumerate every possible rotation list for a given start and end state–count pair.
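The rotate/3 closure itself is not shown in the thread; here is a minimal sketch of what such a step predicate could look like. The 100-position dial and the rule of counting landings on position 0 are my assumptions, chosen only to make the fold runnable:

```prolog
:- use_module(library(clpfd)).

% Hypothetical step predicate for foldl(rotate, Rotations, 50-0, _-Count).
% Pos0-Count0 is the state before the step, Pos-Count the state after.
% Assumes dial positions 0..99; Count is incremented whenever the dial
% lands on position 0 (reified via B #<==> Pos #= 0).
rotate(R, Pos0-Count0, Pos-Count) :-
    Pos #= (Pos0 + R) mod 100,
    B #<==> Pos #= 0,
    Count #= Count0 + B.
```

Because every goal is a CLP(FD) constraint, the fold also runs with the final state known and the rotations partially unknown, at the cost of residual constraints that labeling must then resolve.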
And I would add, a specific problem in a specific mode.
I often view a relation as having a kind of “performance” budget that you can spend on one specific mode only (functional) or spread over multiple modes (logic).
In the context of logic programming, I associate iterators with laziness/constraints.
If you see a logic program as producing a stream of solutions, laziness lets us avoid thrashing.
In imperative programming, iterators are used to model a list of things without actually instantiating the full list.
In logic programming, laziness/constraints allows us to model a list/group/family of solutions without actually iterating through them.
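As a small illustration of this point (my example, not from the thread): a single CLP(FD) variable can stand for a large family of solutions in constant space, and label/1 enumerates them only on demand:

```prolog
:- use_module(library(clpfd)).

% X denotes all multiples of 7 up to one million without
% materializing them; label/1 enumerates them on backtracking.
multiple_of_7(X) :-
    X in 0..1000000,
    X mod 7 #= 0.
```

?- multiple_of_7(X). answers with a constrained variable; adding label([X]) yields 0, 7, 14, … one solution at a time, much like pulling from an iterator.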
AFAICT, iterators/generators are essentially a functional notion, indeed coming from the functional languages, together with higher-order functions more generally (functions that take or return functions, i.e. functions as first-class values); in imperative programming we just write loops…
Ah, OK: indeed I had missed that. Let me work more on it.
Meanwhile, thank you and everybody for the feedback, appreciated.
So basically: tuples_in to label [Value,Index] pairs and enforce a chain on the list of selected indices.
It’s not as fast as I would like: 14 seconds for part 2. So if anyone has a good suggestion for better constraints or a labeling strategy, please comment.
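To make the technique concrete, here is a minimal sketch of the tuples_in/2 + chain/2 combination described above (the predicate name and surrounding code are mine for illustration, not the original solution):

```prolog
:- use_module(library(clpfd)).

% Select N elements of Values as [Value,Index] tuples constrained by
% tuples_in/2; chain/2 forces the selected indices to be strictly
% increasing, so labeling the indices fixes the values by propagation.
select_increasing(Values, N, Selected, Indices) :-
    findall([V,I], nth1(I, Values, V), Relation),
    length(Selected, N),
    length(Indices, N),
    maplist([V,I,[V,I]]>>true, Selected, Indices, Tuples),
    tuples_in(Tuples, Relation),
    chain(Indices, #<).
```

For example, ?- select_increasing([3,1,4,1,5], 2, Sel, Idx), label(Idx). enumerates all increasing index pairs together with their values; since each index maps to exactly one value, labeling the indices alone is enough.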
One issue of CLP(FD) a la Markus Triska and mutual equations is the presence of domain reduction. It can happen that the solver makes a very small domain reduction of a variable X, which then triggers a very small domain reduction of variable Y, which in turn triggers X again, then Y again, and so on. If the initial domain was large, this iterative algorithm can take quite some CPU cycles, slowing down the solver. Easiest example:
:- use_module(library(clpfd)).

test(N) :-
    L = [X,Y],
    L ins 0..N,
    X #> Y,
    X #< Y.
Results from SWI Tinker:
?- time(test(1000)).
% 46,056 inferences, 0.007 CPU in 0.007 seconds (100% CPU, 6579233 Lips)
false.

?- time(test(10000)).
% 455,956 inferences, 0.235 CPU in 0.235 seconds (100% CPU, 1940237 Lips)
false.
clpfd and clpBNR are based on general purpose constraint propagation that is usually too weak to produce the desired results. So, in addition, each provides a search mechanism (labeling or solve), again general purpose but rather brute force, using additional constraints to divide the problem recursively into sub-problems that local constraint propagation can solve. (Searching is also useful when more than one solution exists.)
This is generally effective but really doesn’t take advantage of any specific knowledge of the problem. For example, it can find the roots of any degree polynomial, but isn’t the most efficient way of finding the roots of a quadratic, where a known formula can be directly applied.
Linear systems are analogous; searching mechanisms can be used but there are well known techniques for solving these systems that are much more efficient than brute force search. For example, the CLP(BNR) Users Guide describes how to use Cramer’s Rule and arrays to solve linear systems of equations more efficiently. And the clpBNR_toolkit module provides predicates to find optima for linear systems which are just wrappers around library(simplex) calls. These are better alternatives to searching when the problem specification allows.
clpqr takes a different approach by building support for linear constraints into the constraint mechanism. But this doesn’t handle non-linear constraints other than by deferring until other constraints can be applied that may reduce them to linear constraints.
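A small illustration of both points, using library(clpq) in SWI-Prolog (my example queries):

```prolog
?- use_module(library(clpq)).

%% Linear constraints are solved directly, with no search:
?- {X + Y = 10, X - Y = 4}.
X = 7,
Y = 3.

%% A nonlinear constraint is deferred until other constraints
%% make it linear:
?- {X * Y = 12, X = 3}.
X = 3,
Y = 4.
```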
I think it’s possible: list all the possible orders of the digits, then use min/max or bb_inf/3 to find the best, i.e. required, solution. But I doubt it would be performant.
Instead, this is a more relational solution (than my original Day 3): Day 3 using clpq. It’s mainly list processing.
Sorry, maybe I am still missing something, but that is not it: foldl is not bidirectional, and neither is phrase, by the way: neither streaming nor bidirectional…
Indeed, I guess that is what I am looking for: a way to approach these problems that is both streaming and bidirectional. Anyway, don’t mind too much; this is more of a side-quest of mine.
Strange, it works on my side. Most predicates that are pure are automatically bidirectional if they terminate. foldl/4 is pure; it does not use cut or if-then-else in its implementation. But it is higher order, i.e. it takes a closure argument, so to have bidirectionality the supplied closure needs to be pure as well. One benefit of CLP(FD), despite the fact that it sometimes stumbles over mutual equations, is its ability to turn arithmetic into something pure to some extent via delayed execution. Take this definition and combine it with foldl/4:
:- use_module(library(clpfd)).
add(X,Y,Z) :- Z #= X+Y.
Now SWI-Tinker confirms a bidirectional accumulator sumlist:
?- foldl(add, [1,2,3], 0, X).
X = 6.
?- foldl(add, [1,2,3], X, 6).
X = 0.
This explains a little bit the popularity of CLP(FD) a la Markus Triska: its greedy forward checking and/or domain reduction, without even calling label/1 or labeling/2.
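A tiny example of this propagation-only behaviour, where the answer is obtained by domain reduction alone, with no call to label/1:

```prolog
?- use_module(library(clpfd)).

?- X in 0..9, X #> 5.
X in 6..9.

?- X in 0..9, X #> 5, X #< 7.
X = 6.
```

The second query is answered outright because the domain shrinks to a single value, which CLP(FD) then binds.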
Most of the higher order predicates have some flexibility
in the modes they accept, when they are pure. That the
foldl/4 mode is too strictly documented was noted by
Another use case where foldl/4 would be used with a different mode than in the SWI documentation is difference lists. foldl/4 itself can be defined via DCG:
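One way to write that DCG definition (the names foldl_//2 and foldl_dcg/4 are mine for illustration), with the accumulator pair playing the role of the DCG’s hidden difference list:

```prolog
% foldl/4 as a DCG: each list element is folded by calling the
% closure on the implicit accumulator pair.
foldl_(_, [])     --> [].
foldl_(G, [X|Xs]) --> call(G, X), foldl_(G, Xs).

% SWI's phrase/3 insists its second argument is a (partial) list, so
% for arbitrary accumulators call the expanded nonterminal directly:
foldl_dcg(G, Xs, S0, S) :- foldl_(G, Xs, S0, S).
```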
When you use it to generate text, the mode will be foldl(:Goal, +List, -V0, +V), and not foldl(:Goal, +List, +V0, -V).
Here is a DCG defined closure used with foldl/4:
twice(X) --> [X, X].
?- foldl(twice, [a,b,c], X, []).
X = [a, a, b, b, c, c].
The above test case works in SWI Tinker with the default foldl/4. So foldl/4 has indeed some streaming genes, if we are so free as to associate DCG-generated lists with streaming.