Solve this declaratively

Let In = [1,2,-1,4,5,-1,-1,8,9,10] and Replace = [3,6,7,8,9,10]. Write merge(In, Replace, Out) such that Out = [1,2,3,4,5,6,7,8,9,10], where the predicate replaces the -1 elements in turn with elements taken from the Replace stack.

This is easy to imagine as a recursion of the form merge([X|T1], [Y|T2], Acc, Out), but how would you write this down declaratively without resorting to recursion? I.e. how would you specify the constraints for this solution and let Prolog solve it, rather than expressing the algorithm directly?

In some circles “recursion” used to be considered declarative. For example the textbook definition of append/3 is declarative, for some definition of “declarative”.

What are your constraints on the meaning of the word?

The meaning of the word doesn’t matter too much right now, just don’t use recursion :slight_smile:

OK, so you mean that “declarative” means without recursion. Good. How about using a predicate that is defined recursively?

Use whatever predicates come with SWI. It’s all written in C after all :). The facility I’m talking about is for declarative programming for the end user of SWI-Prolog.

Here is one procedure which does not involve recursion and which perhaps could be made declarative.

Attach indexes to each list entry as pairs. So we get:

Pairs = [1-1,2-2,3-(-1),4-4,5-5,6-(-1),7-(-1),8-8,9-9,10-10]

Then extract the indexes of the -1 entries:

ToReplace = [3,6,7]

Then align those with the replace list to get:

Replacement = [3-3,6-6,7-7].

Then filter out the -1 entries from Pairs to get:

FilteredPairs = [1-1,2-2,4-4,5-5,8-8,9-9,10-10]

Then use append(FilteredPairs, Replacement, FinalPairs), and finally keysort/2 and pairs_values/2 for the final list.

I think the above works. Declarations could be something like this:

  1. Replacement values should have the same indexes as the items to be replaced.

  2. The final output should be the combined indexed input and replacement list. It should be ordered by index, and it should not contain -1 values.
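
To make the procedure above concrete, here is a rough sketch of how I would spell it out in SWI-Prolog; the predicate name merge_pairs/3 and the helper is_hole/1 are mine, and all the library predicates used (numlist/3, pairs_keys_values/3, pairs_keys/2, include/3, exclude/3, keysort/2, pairs_values/2) are autoloaded library predicates in SWI-Prolog:

merge_pairs(In, Replace, Out) :-
    length(In, N),
    numlist(1, N, Indexes),
    pairs_keys_values(Pairs, Indexes, In),      % attach indexes as pairs
    include(is_hole, Pairs, Holes),             % the -1 entries
    pairs_keys(Holes, ToReplace),               % their indexes
    length(ToReplace, M),
    length(ReplValues, M),
    append(ReplValues, _, Replace),             % first M replacement values
    pairs_keys_values(Replacement, ToReplace, ReplValues),
    exclude(is_hole, Pairs, FilteredPairs),     % drop the -1 entries
    append(FilteredPairs, Replacement, FinalPairs),
    keysort(FinalPairs, SortedPairs),
    pairs_values(SortedPairs, Out).

is_hole(_-V) :- V == -1.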

Just an aside, but I note that common Prolog predicates are often named with procedural intent. For example, append, aggregate, maplist, foldl, print, reverse, and so on. I suspect that this is a manifestation of the tension between constraining solution spaces and describing recipes inherent in Prolog.

From a pragmatic point of view:

Prolog programs have well-defined procedural behavior: you know the order of solutions, because you know the order of execution. This is in contrast with SQL, for example, which is a bit more declarative than a Prolog without a constraint solver: you don’t know the order in which the predicates in the WHERE clause are evaluated, and you don’t know the order of results in the result set unless you explicitly specify it.

The obvious declarative approach in Prolog, without a constraint solver or tabling, is generate-and-test, also known as brute force. You have to constrain your solution spaces yourself, usually by judicious ordering of sub-goals and cuts. How declarative is that?

The “tension” that is manifested is mostly about expectations. Prolog’s declarative nature is severely overstated and this has certainly done damage to the adoption of the language.

Just some thoughts.


You forgot tabling. That provides a superset of Datalog semantics, with a far lower risk of running into exponential complexity, and guaranteed termination.

Otherwise I generally agree. Prolog programs (not using constraints or tabling) define the problem in a declarative way, but may or may not be executable. A subset thereof is executable, with varying complexity. Typically you can write the program such that it becomes efficient in terms of complexity, but often it gets harder to read as a definition. For example, the nicest definition of sort probably goes like this :slight_smile:

sort(List, Set) :- permutation(List, Set), ordered(Set).
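
where ordered/1 is not a built-in; the obvious definition (my own spelling of it) would be:

ordered([]).
ordered([_]).
ordered([X,Y|T]) :-
    X @=< Y,
    ordered([Y|T]).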

Writing quicksort or mergesort gets normal N*log(N) complexity, but at the cost of being much harder to recognise as a definition of sort/2.
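
For comparison, a textbook quicksort sketch (my transcription; the partition helper is named partition_/4 here to avoid clashing with partition/4 from library(apply)):

quicksort([], []).
quicksort([Pivot|T], Sorted) :-
    partition_(T, Pivot, Smaller, Larger),
    quicksort(Smaller, SortedSmaller),
    quicksort(Larger, SortedLarger),
    append(SortedSmaller, [Pivot|SortedLarger], Sorted).

partition_([], _, [], []).
partition_([H|T], Pivot, [H|Smaller], Larger) :-
    H @=< Pivot, !,
    partition_(T, Pivot, Smaller, Larger).
partition_([H|T], Pivot, Smaller, [H|Larger]) :-
    partition_(T, Pivot, Smaller, Larger).

Both relate a list to an ordered permutation of itself, but here the ordering of the result is a consequence of the partitioning rather than something stated directly.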


That’s a challenge that I believe is independent of the idea of declarative programming.
Consider writing a predicate for a “person” type:
person(Age, Citizenship)
and we want to determine if they are eligible for joining the Canadian Military. For that to be true, we must declare something about Age AND about Citizenship.
It would be impossible otherwise.
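
A toy version of that rule (the predicate name, the age threshold and the citizenship test are invented purely for illustration):

% Both arguments must be constrained for the rule to mean anything.
eligible_for_military(Age, Citizenship) :-
    Age >= 18,
    Citizenship == canadian.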

Now a Prolog list (of course) also has two parts, a head and a tail. So if we want to declare something about this list, we must be allowed to declare something about the head and something about the tail. It just so happens that because the data is recursively defined, a declaration of knowledge about list data will be recursive OR will rely on a predicate that could be defined recursively.

I totally agree. For example, instead of maplist, we could have relation_lists/2 that relates a relation across elements of lists. max_list actually relates a list to its max (should be list_max), etc.

Just noting in passing that this kind of problem is declaratively solved by an abstraction that the functional programming community calls lenses (like the lens library in Haskell, or the specter library in Clojure). I expect that would be the idiomatic declarative solution also in Prolog.

The original post seems to have been trying to avoid those procedural/recursive approaches? Hard to tell.


I suppose being declarative in this context should imply the factoring of a programme into specification and execution, wherein the specification focuses on constraints for the solution rather than the details of the execution. Examples would be Z3, (Mini)Zinc, AMPL, and so on.

To build on @jan’s example, I don’t think vanilla Prolog fits the above definition when a programme is mostly concerned with constraining SLD resolution (i.e. pruning search) rather than constraining valid answers, because we are still spending most of our time thinking about execution.

I think @jan’s sort programme above is a valid declarative definition, even though it’s useless. The problem is that SLD is not much of a solver, and very quickly we are forced into constraining search and thinking about execution. Really I’d want to be able to write down: sort(X,Y), Y = ordered(X), and think no further. How Z3 enables and approaches this is very insightful.

CLP, CHR and other systems make the solver part of the Prolog system more capable, and with them the amount of code that can be written fitting my definition above increases.

It is also how books like “Simply Logical” and popular online tutorials put Prolog forward. In Prolog, we “reason about” rather than code. Popular examples like tube station connectivity problems, Sudoku solving, 8-Queens, and so on, are all about writing down the spec and not thinking about the execution.
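
As a concrete instance of that style, the familiar N-queens formulation with library(clpfd) (roughly as it appears in common CLP(FD) tutorials) is little more than the statement of the constraints, with label/1 left to do the searching:

:- use_module(library(clpfd)).

n_queens(N, Qs) :-
    length(Qs, N),
    Qs ins 1..N,                 % one queen per column, Qs are the row numbers
    safe_queens(Qs),
    label(Qs).

safe_queens([]).
safe_queens([Q|Qs]) :-
    safe_queens(Qs, Q, 1),
    safe_queens(Qs).

safe_queens([], _, _).
safe_queens([Q|Qs], Q0, D0) :-
    Q0 #\= Q,                    % not on the same row
    abs(Q0 - Q) #\= D0,          % not on the same diagonal
    D1 #= D0 + 1,
    safe_queens(Qs, Q0, D1).

A query such as ?- n_queens(8, Qs). enumerates placements; the clauses only say what a placement has to satisfy.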

Perhaps Prolog is intending to be a declarative language, just falling short. But if Prolog is not intending to be that, then what is it intending to be exactly?

History tells us specifically that Prolog was born from the realization that a Horn clause can simultaneously describe a logical formula and a sequential computation. This is an extremely powerful idea that doesn’t exist so naturally in any other programming language.

The benefits of this can be had for any pure monotonic parts of Prolog code.
For the sake of present-day practicality, some Prolog clauses are not pure monotonic. Are there new styles and idioms of writing Prolog programs, along with new libraries, that would allow practical, performant coding while staying in the pure monotonic subset of Prolog? Maybe … Probably …

We should have maybe started with some code. Here is the unwanted recursive solution. This is one of many ways to do it:

/** merge(+In, +Stack, -Out)

Replace -1 in In with elements in turn from Stack
*/
merge([], _, []).
merge([I|In], Stack, [O|Out]) :-
    (   I == -1
    ->  Stack = [O|S0],
        merge(In, S0, Out)
    ;   I = O,
        merge(In, Stack, Out)
    ).
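
For reference, querying this version with the data from the first post should give (not copied from an actual session):

?- merge([1,2,-1,4,5,-1,-1,8,9,10], [3,6,7,8,9,10], Out).
Out = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].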

How about:

merge(In, Repl, Out) :-
    maplist(eq_minus_1, In, Out),   % -1 becomes a fresh variable, others are copied
    include(var, Out, Incl),        % collect those fresh variables
    append([Incl, _], Repl).        % unify them with a prefix of Repl

eq_minus_1(-1, _).
eq_minus_1(X, X) :-
    dif(X, -1).

Result:

?- merge([1,2,-1,4,5,-1,-1,8,9,10], [3,6,7,8,9,10], Out).
Out = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ;
false.

However - the code for maplist/3, include/3 and append/2 all includes recursion - I don’t see any point in pretending that such recursion does not exist.

This is how I would usually write it:

merge2([], _, []).
merge2([H|T], C, [E|R]) :-
    merge2_(H, C, E, CT),
    merge2(T, CT, R).

% merge2_(+Elem, +Stack, -Out, -RestStack): a -1 element pops the stack
merge2_(-1, [H|CT], H, CT).
merge2_(H, C, H, C) :-
    dif(H, -1).

Result:

?- merge2([1,2,-1,4,5,-1,-1,8,9,10], [3,6,7,8,9,10], Out).
Out = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ;
false.

Not doing this recursively seems odd to me. After all, all the definitions that avoid recursion use library predicates that are defined recursively. Many of them are quite “procedural”, both in their naming and implementation. By far the most natural implementation is something like this:

merge([], _, []).
merge([-1|In], [O|S0], [O|Out]) :-
    merge(In, S0, Out).
merge([H|In], Stack, [H|Out]) :-
    dif(H, -1),
    merge(In, Stack, Out).

Here, we can replace dif/2 by \==/2 or use a cut in the second clause. Doing so stops the predicate from being pure. I’m not sure whether declarative and pure should be considered independent or possibly even synonyms. In any case, to me the above is declarative, expresses the relation cleanly and can be used in any mode (is pure).
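
To illustrate that last point, the definition can also be called with only partially instantiated arguments, for instance (writing X and _T for the fresh variables; an actual toplevel will print its own names):

?- merge([1,-1,3], Stack, Out).
Stack = [X|_T],
Out = [1, X, 3] ;
false.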

In practical situations we will have information on the required modes and probably use a cut :slight_smile: Probably this means it will be called in mode +,+,-. If so, I’d be tempted to write the code below these days. It expresses the relation only in mode +,+,- and demands that the inputs be sufficiently instantiated to the right types. All other usage will result in an exception. So, it is limited and you make this explicit.

merge([], _, Out) =>
    Out = [].
merge([-1|In], [O|S0], Out) =>
    Out = [O|Out1],
    merge(In, S0, Out1).
merge([H|In], Stack, Out) =>
    Out = [H|Out1],
    merge(In, Stack, Out1).

Yes. Thanks. Edited, so people can copy/paste.

Thank you @jan, your response is insightful to me.

If “declarative” implies separating the specification of the result and the execution of the solution, I don’t think purity is an operative concept. However, I could see why we may want to put forward Prolog as a language for expressing symbolic relationships, defined as a parameterised enumeration of possibilities. I can see why an emphasis on “purity” would be essential to that goal and how adding CLP(FD) stretches that vision to the finite integer domain. Meanwhile, the constraining of these enumerations makes code useful for this or that purpose.