The limits of the declarative approach

I’m putting together a quick introduction to Prolog (and declarative approaches in general), both as an online resource and as part of a class for Seniors I’m teaching.

In my gut, I feel that moving away from an imperative style is the way we will continue to grow programming as the domains and environments get more complex.

At the same time, the purely declarative solutions often seem to lack a grounding in reality, heading off into the weeds where a more imperative solution would follow the path.

For example, I searched for a while for a purely declarative Towers of Hanoi (I know, I know), and all the solutions that claimed to be so were really just rewrites of the standard, imperative algorithm.

I had an attempt at doing it by simply representing the terms of the problem. You can see it here: https://gist.github.com/pragdave/0207669683ab18b28caa8e5732ed1f67

Clearly it isn’t viable.

My question is: could it be? Or will we always need to inject imperative knowledge into our code?

Cheers

Dave Thomas
@pragdave

2 Likes

Have you seen RosettaCode Tower of Hanoi Prolog?

Yup: it’s imperative… :frowning:

I found this knight-moves-on-a-chessboard answer. Would that be along the lines of what you seek? I'm just trying to understand what sort of answer you would find acceptable.

Would an optimal planner version count as declarative: declare the available moves - as in your Prolog version - and then use a planner to solve it?

I have not implemented this in Prolog, but in the Prolog inspired language Picat:
http://hakank.org/picat/towers_of_hanoi_planner.pi

It solves N = 1…9 optimally (i.e. the theoretical minimum length 2**N-1) in a few seconds.
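In plain Prolog, the "declare the moves, then search" idea might be sketched like this (my sketch, not Hakan's Picat code; the state representation and predicate names are my own inventions):

```prolog
% A state is peg(A,B,C): three lists of disk sizes, smallest first.
% move/2 declares the legal one-step transitions, and nothing else.
move(peg([T|A],B,C), peg(A,[T|B],C)) :- can_place(T, B).
move(peg([T|A],B,C), peg(A,B,[T|C])) :- can_place(T, C).
move(peg(A,[T|B],C), peg([T|A],B,C)) :- can_place(T, A).
move(peg(A,[T|B],C), peg(A,B,[T|C])) :- can_place(T, C).
move(peg(A,B,[T|C]), peg([T|A],B,C)) :- can_place(T, A).
move(peg(A,B,[T|C]), peg(A,[T|B],C)) :- can_place(T, B).

can_place(_, []).
can_place(T, [D|_]) :- T < D.

% Iterative deepening over the plan length finds a shortest plan,
% at exponential cost; the planner knows nothing about Hanoi.
plan(S0, S, Plan) :-
    length(Plan, _),
    plan_(S0, S, Plan).

plan_(S, S, []).
plan_(S0, S, [S1|Plan]) :- move(S0, S1), plan_(S1, S, Plan).

% ?- numlist(1, 3, Ds), plan(peg(Ds,[],[]), peg([],[],Ds), Plan).
```

The Picat planner is of course far smarter than this brute-force length/2 loop, but the problem statement itself is nothing more than the move/2 relation.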

Best,

Hakan

1 Like

I am starting to think that a declarative solution would be a Hamiltonian cycle, and that certain states on the cycle are the answer.

In other words, don’t think of a disk as going only from one pole to another in one direction; depending on whether you go forward or backward on the cycle, you can reach a different solution position.

I have not read this yet, but others (Jan B) may find it of value for this idea:

HAMILTONIANICITY OF THE TOWERS OF HANOI PROBLEM

Hanoi Graph

OEIS: Hanoi

The book Simply Logical: Intelligent Reasoning by Example by Peter Flach (http://people.cs.bris.ac.uk/~flach/SimplyLogical.html) has this example solution:

%%% Part II: Reasoning with structured knowledge %%%

:-op(900,xfx,to).

% hanoi(N,A,B,C,Moves) <- Moves is the list of moves to 
%                         move N disks from peg A to peg C, 
%                         using peg B as intermediary peg 
hanoi(0,A,B,C,[]).
hanoi(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A to C|Moves2],Moves).

HTH.

This does not seem to me the ideal problem for evaluating declarative techniques. The program is inherently exponential and there is only one elementary algorithm that solves it. You can write that algorithm out, and Barb’s version is as concise as can be (except that I believe with Prolog you can use a DCG and avoid the append).

If you wanted a program to infer a solution from principles, that’s just a non-starter because of the size of the conclusion you want to be inferred. Either your inference method trivially finds the solution, or it gets overwhelmed by the size of solution it’s trying to find. With other more tractable problems you could demonstrate interesting results.
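For what it's worth, the DCG variant alluded to above might look like this (a sketch; the grammar threads the move list as a difference list, so the explicit append/3 disappears):

```prolog
% hanoi//4 describes the move list as a DCG body, in the same
% recursion and move order as the append/3 version.
hanoi(0, _, _, _) --> [].
hanoi(N, A, B, C) -->
    { N > 0, N1 is N - 1 },
    hanoi(N1, A, C, B),
    [A-C],
    hanoi(N1, B, A, C).

% ?- phrase(hanoi(2, a, b, c), Moves).
% Moves = [a-b, a-c, b-c].
```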

Somehow tabling makes hanoi faster. I think it should, but there seems to be a bug somewhere:

In SWI-Prolog:

?- time(hanoi(18,a,b,c,_)).
% 2,883,647 inferences, 0.250 CPU in 0.266 seconds (94% CPU, 11534588 Lips)
true.

?- time(hanoi2(18,a,b,c,_)).
% 460,687 inferences, 1.109 CPU in 1.383 seconds (80% CPU, 415267 Lips)
true.

In my system:

?- time(hanoi(18,a,b,c,_)).
% Up 630 ms, GC 77 ms, Threads 547 ms (Current 01/03/20 00:11:11)
Yes

?- time(hanoi2(18,a,b,c,_)).
% Up 146 ms, GC 5 ms, Threads 141 ms (Current 01/03/20 00:11:13)
Yes

What's going wrong in SWI tabling? The fifth argument of the tabled predicate will be a ground term, so it could use some structure sharing. Maybe I should raise a GitHub ticket.

BTW: This is the test code:

hanoi(0,_,_,_,[]) :- !.
hanoi(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

:- table hanoi2/5.
hanoi2(0,_,_,_,[]) :- !.
hanoi2(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi2(N1,A,C,B,Moves1),
	hanoi2(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

Edit 03.01.2020:
Ticket is #534

Yes, I think I agree. Which is interesting in its own right, because I think it defines another criterion for language choice.

And, folks, just to be clear:

hanoi(0,_,_,_,[]) :- !.
hanoi(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

is not declarative code…

This is not declarative:

hanoi(0,_,_,_,[]) :- !.
hanoi(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

And this version from Barb's post, with the “to” declaration removed and the singleton warnings fixed, is not correct:

hanoi(0,_,_,_,[]).
hanoi(N,A,B,C,Moves):-
	N1 is N-1,
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

Can you spot the error? Without a guard, the second clause also matches N = 0, so on backtracking the recursion descends through ever more negative N and never terminates.

A declarative solution that always terminates for integer arguments would read:

hanoi(0,_,_,_,[]).
hanoi(N,A,B,C,Moves):-
	N > 0, N1 is N-1,  /* this guard is needed */
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

But if you know that the first argument is always input and non-negative, you can also do it this way:

hanoi(0,_,_,_,[]) :- !.
hanoi(N,A,B,C,Moves):-
	N1 is N-1, /* implicitly nonzero because of the cut */
	hanoi(N1,A,C,B,Moves1),
	hanoi(N1,B,A,C,Moves2),
	append(Moves1,[A-C|Moves2],Moves).

You cannot table predicates that do not always terminate.
For tabling you need to be more careful in your formulation.

But the testing was done with the always-terminating
cut variant. So it's still open what the error was there.

Thanks for replying, but I am not sure exactly what you are agreeing with, as there is no reference to an earlier post.

If you are using this site in an Internet browser, you can select text with the mouse from an earlier post and quote it into your post.

There will always be some problems where the execution strategy must be carefully considered. For example, the simple definition of some problem could be a set-of-all-subsets then filter, but an efficient algorithm might be polynomial. There’s no general method to go from declaration to efficient execution.

In the case of Prolog, you can sometimes describe a problem and then describe an execution strategy and do:

?- execute_strategy(strategy_C, Problem_Definition).

The canonical strategy for Prolog is depth-first backtracking, but others work.
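As a minimal sketch of that split (solve/1 and solve_d/2 are illustrative names, not a library): the same clause database can be run under the canonical depth-first strategy, or under a depth-bounded one, without touching the problem definition.

```prolog
% Vanilla meta-interpreter: the canonical depth-first strategy.
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :- clause(Goal, Body), solve(Body).

% Same problem definition, different strategy: iterative deepening.
solve_id(Goal) :-
    between(1, 1000, Depth),
    solve_d(Goal, Depth).

solve_d(true, _) :- !.
solve_d((A, B), D) :- !, solve_d(A, D), solve_d(B, D).
solve_d(Goal, D) :-
    D > 0, D1 is D - 1,
    clause(Goal, Body), solve_d(Body, D1).
```

A real implementation would handle built-ins and avoid re-finding solutions at each depth; the point here is only that the strategy is a separate, swappable piece.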

Attempt at declarative hanoi.

% hanoi_difference(State1, State2, Difference) :-
%   Difference is the sequence of moves to get from State1 to State2.
%   State1 and State2 are each three lists of disks on a peg;
%   both states must contain all numbers 1-N, and each list
%   must be in ascending order.
% I won't write an algorithm here, but I expect it
% can be done and might appear declarative.

hanoi(N, Steps) :-
    numlist(1, N, Disks),
    hanoi_difference([Disks, [], []], [[], Disks, []], Steps).

Doing this won’t give a better solution to the original problem, except in the sense that it’s more broadly usable.

Can I ask: why is a purely declarative solution necessary for any problem? Is it so bad if a solution is 80% declarative, or 60% declarative? That’s still “more declarative” than any imperative language, although I guess that’s a bit of a tautology.

The way I’ve learned things is that all Prolog code has both a declarative and an imperative reading, and that one must keep both in mind simultaneously while coding to optimise the code’s efficiency. This doesn’t seem to be just my wild imagination. I help with correcting Prolog coursework at my university, and we’re directed to give feedback to the students about things like tail-call optimisation, which very often means strategically placing a cut to avoid unnecessary backtracking and in any case is a purely imperative consideration (it’s firmly in the “control” part of “logic + control”). Or, for example, using an accumulator rather than append/3 to build up a list, because that makes for more efficient execution even though declaratively the meaning is the same.
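The append/3-versus-accumulator point, for the record, is the textbook naive reverse:

```prolog
% Quadratic: append/3 re-walks the growing result on every step.
naive_reverse([], []).
naive_reverse([H|T], R) :-
    naive_reverse(T, RT),
    append(RT, [H], R).

% Linear, and the recursive call is a tail call: the accumulator
% threads the partial result forward.
acc_reverse(L, R) :- acc_reverse(L, [], R).
acc_reverse([], Acc, Acc).
acc_reverse([H|T], Acc, R) :- acc_reverse(T, [H|Acc], R).

% Declaratively both mean the same thing:
% ?- acc_reverse([1,2,3], R).
% R = [3, 2, 1].
```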

On a more personal note, I like Prolog’s pragmatism. It does sacrifice purity (logical or declarative) to make way for usability and efficiency. It is a real-world programming language that does not demand that the programmer twist herself into knots just to be able to perform ordinary tasks that are very simple to do in imperative languages (looking at you, Haskell, and your hare-brained puritan monads). So, yes, we have cuts and we have impure database writes and stream writes etc. But, this way, we have a language that is as close to the purely declarative paradigm as any general-purpose language has ever got. We can program in something that looks very much like FOL. What else comes close?

So, to finally answer your question (sorry for the mini-rant): yes, I think we’ll always need to have an imperative component if we want to have a usable language, rather than a pretty toy.

4 Likes

Depth-first search as deployed by Prolog is incomplete for Horn clauses. For a start:

BFS v. DFS (Which algorithm is complete?)
https://stackoverflow.com/q/32553286/502187

You cannot have declarative code in Prolog and execute it as well. You would need another interpreter (tabling helps sometimes) to really execute declarative code, or restrict yourself in what code you allow.

My question is: could it be? Or will we always need to inject imperative knowledge into our code?

I would say it depends.

Some undigested notes:

It depends on the problem and what you mean by “declarative”.

In the ideal “declarative” case, you would just enter your set of constraints on an acceptable solution, give it to the machine, and the machine searches for the solution by whatever means are available (limited by combinatorial explosion, i.e. space, time & energy), possibly generating the “imperative algorithm” as needed (which is approximately what Prolog does when it constructs a search tree).

(Btw, Answer Set Programming seems a lot more declarative than Prolog, check it out)

Functional programming is sometimes described as “declarative”. Well, it is about passing terms around and reducing them to generate new values based on existing ones as the machine time ticks forward. This is not state-space thinking, but it is marginally declarative. (Look for the paper “Out of the Tar Pit”)

But machines are rarely running in isolation. The declarative approach fails a bit as soon as I/O is involved (getting random numbers, user input, IP packets or anything else from an “oracle”; this is the difference between non-interactive and interactive computing, or between Turing a-machines and o-machines). Standard logic becomes unacceptable, weird stuff shows up that can only be understood in an operational way (i.e. by looking under the hood), and/or you get to handle monads. Also: run-time exception handling, meta-predicates, asserts and retracts…

There is another aspect, in that imperative algorithms are “easy” to understand, at least when documented and liberally “kept on rails through state space” with assertions and strong, static typing. Declarative code, while far more compact, often looks much hairier - you have to reflect a long time about whether what you have written will yield what you want, proving little theorems about the code in your head, and this generally by resorting to the operational semantics. Anyway, there seems to be a no-free-lunch theorem at work here somewhere.

One last aspect is the separation between the problem you want to solve and the language used to describe it. When writing code in Clojure, Java, or whatever, there is no illusion about where the problem description is: it is in the abstract data structures, or the objects, and there is a LARGE amount of code to make these objects usable and manipulable by other parts of the program and by the user. In logic programming, or at least in Prolog, people seem to get confused. They start to couch a problem in (a fragment of first-order intuitionistic bivalent) logic (doing backwards chaining in a depth-first manner), forgetting that this is the programming language, not the problem-description language. Sure, the two may be congruent, but this is not necessarily the case. CHR (which may or may not match a fragment of linear logic, I'm not sure), for example, can be used from within Prolog, but it is a different language and first needs to be compiled into Prolog.

Finally, there is the problem that people are rather misinformed about logic and logic programming; in fact they have no idea at all. We have all seen some simple logic formalism and boolean logic in math classes, which seem to come from God or Aristotle himself. It’s a knowledge desert out there. People scoff when you tell them that there are multivalued logics. I was once in a lecture where the presenter had composed fragments of C or LISP code using genetic algorithms to generate “good enough” code. I asked whether he had tried a logic programming language, upon which the response was that he was not very interested in finding out whether X is the father or sibling of Y. Groan!

Finally finally, take a look at this for more ideas:

Kowalski 2014: History of Logic Programming

https://www.researchgate.net/publication/277670164_History_of_Logic_Programming

Carl Hewitt: Middle History of Logic Programming (sadly abrasive attack on LP with lots of “I was first” statements, which would imply that Hewitt has a brain bigger than 30 years of research, doubtful at best, but interesting nonetheless):

3 Likes