My question is: could it be? Or will we always need to inject imperative knowledge into our code?
I would say it depends.
Some undigested notes:
It depends on the problem and what you mean by “declarative”.
In the ideal “declarative” case, you would just enter your set of constraints describing an acceptable solution, hand it to the machine, and the machine would search for a solution by whatever means are available (limited by combinatorial explosion, i.e. space, time and energy), possibly generating the “imperative algorithm” as needed (which is approximately what Prolog does when it constructs a search tree; see the sketch below).
(Btw, Answer Set Programming seems a lot more declarative than Prolog, check it out)
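To make this concrete, here is a minimal sketch using SWI-Prolog’s clp(fd) library (my choice of library for illustration, not something the above argument depends on): you state what counts as a solution, and only label/1 triggers the actual search.

    :- use_module(library(clpfd)).

    % State what a solution looks like; the machine searches for it.
    % X and Y are digits, they sum to 10, and X is strictly larger.
    solution(X, Y) :-
        [X, Y] ins 0..9,    % domains: the space of candidates
        X + Y #= 10,        % a constraint, not a computation
        X #> Y,
        label([X, Y]).      % search happens only here

    % ?- solution(X, Y).
    % X = 6, Y = 4 ;
    % X = 7, Y = 3 ;  (and so on)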
Functional programming is sometimes described as “declarative”. Well, it is about passing terms around and reducing them to generate new values from existing ones as machine time ticks forward. This is not state-space thinking, but it is only marginally declarative. (Look for the paper “Out of the Tar Pit”.)
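As a rough illustration of that flavour in Prolog itself (the eval/2 predicate and the term encoding are made up for this sketch): terms are reduced to values purely as a function of their inputs, with no state in sight.

    % Reduce an arithmetic expression term to a value.
    eval(num(N), N).
    eval(add(X, Y), V) :- eval(X, VX), eval(Y, VY), V is VX + VY.
    eval(mul(X, Y), V) :- eval(X, VX), eval(Y, VY), V is VX * VY.

    % ?- eval(add(num(2), mul(num(3), num(4))), V).
    % V = 14.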
But machines rarely run in isolation. The declarative approach fails a bit as soon as I/O is involved (getting random numbers, user input, IP packets or anything else from an “oracle”; this is the difference between non-interactive and interactive computing, or between Turing a-machines and o-machines). Standard logic becomes inadequate, so weird stuff shows up that can only be understood in an operational way (i.e. by looking under the hood), and/or you get to handle Monads. Also: run-time exception handling, meta-predicates, asserts and retracts…
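For instance, the standard assertz/1 and retract/1 predicates destroy the declarative reading outright; a small sketch (counter/1 and next_id/1 are invented names for illustration):

    :- dynamic counter/1.
    counter(0).

    % The "same" goal yields a different answer on each call: pure
    % logic cannot express this; you have to think operationally.
    next_id(Id) :-
        retract(counter(N)),
        Id is N + 1,
        assertz(counter(Id)).

    % ?- next_id(A), next_id(B).
    % A = 1, B = 2.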
There is another aspect: imperative algorithms are “easy” to understand, at least when documented and liberally “kept on rails through state space” with assertions and strong, static typing. Declarative code, while far more compact, often looks much hairier: you have to reflect for a long time on whether what you have written will yield what you want, proving little theorems about the code in your head, generally by resorting to the operational semantics. Anyway, there seems to be a no-free-lunch theorem at work here somewhere.
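A classic instance of those little theorems (the parent/2 facts are invented for illustration): two definitions with the same declarative reading, the transitive closure of parent/2, of which only one terminates under Prolog’s depth-first strategy.

    parent(tom, bob).
    parent(bob, ann).

    % Terminates: the recursion consumes a parent/2 fact first.
    ancestor(X, Z) :- parent(X, Z).
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

    % Same declarative reading, but left-recursive: loops forever
    % once it has enumerated its answers.
    ancestor2(X, Z) :- parent(X, Z).
    ancestor2(X, Z) :- ancestor2(X, Y), parent(Y, Z).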
One last aspect is the separation between the problem you want to solve and the language used to describe it. When writing code in Clojure, Java, or whatever, there is no illusion about where the problem description lives: it is in the abstract data structures, or the objects, and there is a LARGE amount of code to make these objects usable and manipulable by other parts of the program as well as by the user. In logic programming, or at least in Prolog, people seem to get confused. They start to couch a problem in (a fragment of first-order intuitionistic bivalent) logic (doing backwards chaining in a depth-first manner), forgetting that this is the programming language, not the problem description language. Sure, the two may be congruent, but this is not necessarily the case. CHR (which may or may not match a fragment of linear logic, not sure), for example, can be used from within Prolog, but it is a different language and first needs to be compiled into Prolog.
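For a taste of that difference, here is the classic leq example from the CHR literature, runnable in SWI-Prolog via library(chr): the rules live in a Prolog file, but they are rewrite rules over a constraint store, not Prolog clauses.

    :- use_module(library(chr)).
    :- chr_constraint leq/2.

    reflexivity  @ leq(X, X) <=> true.
    antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
    transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).

    % ?- leq(A, B), leq(B, C), leq(C, A).
    % A = B, B = C.   (the store reduces the cycle to equality)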
Finally, there is the problem that people are rather misinformed about logic and logic programming; in fact, they have no idea at all. We have all seen some simple logic formalism and boolean logic in math classes, which seem to come from God or Aristotle himself. It’s a knowledge desert out there. People scoff when you tell them that there are multivalent logics. I was once at a lecture where the presenter had composed fragments of C or LISP code using genetic algorithms to generate “good enough” code. I asked whether he had tried a logic programming language, upon which the response was that he was not very interested in finding out whether X is the father or a sibling of Y. Groan!
Finally finally, take a look at these for more ideas:
Kowalski 2014: History of Logic Programming
https://www.researchgate.net/publication/277670164_History_of_Logic_Programming
Carl Hewitt: Middle History of Logic Programming (a sadly abrasive attack on LP, with lots of “I was first” statements, which would imply that Hewitt has a brain bigger than 30 years of research; doubtful at best, but interesting nonetheless):