Prolog totally missed the AI Boom

The problem I am trying to address was already addressed here:

ILP and Reasoning by Analogy
Intuitively, the idea is to use what is already known to explain
new observations that appear similar to old knowledge. In a sense,
it is opposite of induction, where to explain the observations one
comes up with new hypotheses/theories.
Vesna Poprcova et al. - 2010
https://www.researchgate.net/publication/220141214

The problem is that ILP doesn’t try to learn and apply analogies, whereas autoencoders and transformers typically try to “grok” analogies, so that they can perform well in certain domains with less training data. They will do some inference on the encoder side even for unseen input data, and some generation on the decoder side even for unseen latent space configurations derived from unseen input data. By unseen data I mean data not in the training set. The full context window may tune the inference and generation, which appeals to:

Analogy as a Search Procedure
Rumelhart and Abrahamson showed that when presented with
analogy problems like monkey:pig::gorilla:X, with rabbit, tiger, cow,
and elephant as alternatives for X, subjects rank the four options
following the parallelogram rule.
Matías Osta-Vélez - 2022
https://www.researchgate.net/publication/363700634

There are learning methods that work similarly to ILP, in that they are based on positive and negative examples. And the statistics can involve bilinear forms.
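
To make the parallelogram rule concrete: the answer to A:B::C:X is the candidate whose embedding lies closest to C + (B − A). Here is a minimal sketch (the 2-D embeddings are toy values I made up, and vec/2, target/4, analogy/5 are illustrative names; assumes SWI-Prolog with library(yall) and library(apply)):

% Toy 2-D embeddings, invented for illustration only.
vec(monkey,   [0.3, 0.4]).
vec(pig,      [0.4, 0.2]).
vec(gorilla,  [0.7, 0.7]).
vec(rabbit,   [0.1, 0.1]).
vec(tiger,    [0.6, 0.9]).
vec(cow,      [0.7, 0.1]).
vec(elephant, [0.9, 0.5]).

% Parallelogram target for A:B::C:X is C + (B - A).
target(A, B, C, T) :-
   vec(A, VA), vec(B, VB), vec(C, VC),
   maplist([X,Y,Z,W]>>(W is Z + Y - X), VA, VB, VC, T).

% Squared Euclidean distance between two vectors.
sq_dist(V, W, D) :-
   foldl([X,Y,A0,A]>>(A is A0 + (X-Y)**2), V, W, 0, D).

score(T, Cand, D-Cand) :-
   vec(Cand, V),
   sq_dist(V, T, D).

% Best is the candidate nearest to the parallelogram target.
analogy(A, B, C, Candidates, Best) :-
   target(A, B, C, T),
   maplist(score(T), Candidates, Pairs),
   keysort(Pairs, [_-Best|_]).

With these toy vectors, ?- analogy(monkey, pig, gorilla, [rabbit, tiger, cow, elephant], Best). yields Best = elephant.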

I just wanted to clarify, @j4n_bur53 (in the context of a message that was deleted), that I’m not angry at you, or at anyone in this conversation. I like to disagree robustly. I hope we can do this without having a fight.

Or I guess it’s the fault of the Mediterranean temperament. I also tend to wave my hands wildly and jump up and down when I really get into my stride. I kind of understand why this comes across as if I’m having some sort of crisis, but if you’re worried just watch the whites of my eyes: if they start rolling into my head, that’s when it starts to get dangerous :stuck_out_tongue:

That’s a good paper. I always reach for it when I need a good review of neural nets.

The latest Leela can play at GM strength (about 2500 Elo) without using tree search, just the neural net. When you add the search algorithms, you get to about 3500. Keep in mind Elo is a logarithmic scale of comparative ability. Another program without a neural net would rate slightly lower than the complete Leela.
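
For scale: under the standard Elo model, the expected score of the lower-rated player is E = 1 / (1 + 10^((Rb − Ra)/400)), so a 1000-point gap like 2500 vs. 3500 gives E = 1 / (1 + 10^2.5) ≈ 0.003, i.e. the weaker side scores about 0.3% of the points.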

LLMs cannot play chess at all. NNUE (the efficiently updatable neural network evaluation) is the key. And that is self-directed reinforcement learning.

I think one of the reasons these NNs took off is the tensor coding. Tensors allow massive amounts of simultaneous computation, hardwired into chips that were developed for fast graphics and fast blockchain compute and have now been adapted to fast NN chaining.

You inherently get more done faster because of the speed, and the self-education aspect of “learning” means a lot of this is automatable.

Prolog is suitable for designing problem-space search algorithms, but to make them truly fast, the algorithms ought to be vectorized and parallelized. To get a small sense of the difference, look at how a Google search is computed compared to how we would code a database search: Google’s method is based on tensors and is much more efficient.

The Python standard library is lacking. The endless packages are there, but they come with their own problems. Python is a lowest common denominator language. It is already out of vogue in the “industry”, and it has become more of a mess than Java in a shorter time. There was a point in the early 2000s when Python replaced Java in all the “programming for dummies” university courses offered to natural scientists, which explains a lot of its popularity among science practitioners.

As for Prolog, there is a 99% chance that if you hear about it in university (because you will certainly NOT hear about it elsewhere), it will be badly misrepresented. The multiple implementations and the ISO Prolog insanity (as seen from the sidelines) are surely not helping.


I think Python practitioners understand a very important software engineering principle for plug and play. It’s extremely simple: you have to identify variation points and build them into your software. Only then can you plug and play. For example, this code uses a variation point via higher-order programming; I didn’t hard-code “fun” into the trisect algorithm:

% trisect(+A, +B, +F, -X): ternary search for the argmax of a
% unimodal function F on the interval [A,B].
trisect(A, B, _, X) :- B - A < 1e-6, !,   % interval small enough: stop
   X is (A+B)/2.
trisect(A, B, F, X) :-
   P is (2*A+B)/3,        % one-third point
   call(F, P, U),
   Q is (A+2*B)/3,        % two-thirds point
   call(F, Q, V),
   (U < V ->              % maximum cannot lie in [A,P]
      trisect(P, B, F, X);
      trisect(A, Q, F, X)).
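
For instance, using SWI-Prolog’s library(yall) lambda syntax (my example, not part of the original post), finding the maximum of sin on [0, 3]:

?- trisect(0.0, 3.0, [X,Y]>>(Y is sin(X)), R).
R = 1.570796...

Any goal that maps an input number to an output value can be passed as F, which is exactly the variation point.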

Now if you have another library like clp(BNR), which has no variation point anywhere, you are stuck: you cannot extend it. Python practitioners understand the principle of variation points both from the viewpoint of developing libraries and from the viewpoint of using libraries. In Prolog not much is seen in dealing with this problem. I don’t know a single Prolog library or pack that provides some variation points somewhere. They are all closed. Only recently maplist/N, foldl/N, etc. have been added to library(lists), but this is basically all that was done so far. Other means to create variation points besides higher-order programming via closures, like for example providing data sources via comprehensions, are also not explored. For machine learning, dealing with data sources is an interesting problem, and how to make them plug and play.
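
To sketch what a data-source variation point could look like (my own illustration; sum_count/3 and mean/2 are made-up names, assuming SWI-Prolog with library(yall) and library(lists)), an aggregate can take its data source as a closure, so any backtracking generator can be plugged in:

% sum_count(+Source, -Sum, -Count): aggregate over whatever the
% Source closure enumerates on backtracking.
sum_count(Source, Sum, Count) :-
   findall(X, call(Source, X), Xs),
   sum_list(Xs, Sum),
   length(Xs, Count).

% mean(+Source, -Mean): arithmetic mean over the data source.
mean(Source, Mean) :-
   sum_count(Source, Sum, Count),
   Count > 0,
   Mean is Sum / Count.

?- mean([X]>>member(X, [1,2,3,4]), M).
M = 2.5.

The same mean/2 then works unchanged whether the source enumerates a list, facts from the database, or rows streamed from a file.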

I need some examples to understand you. My experience with Python is that it relies heavily on pre-existing concepts that are widely shared in the programming community. This allows it to achieve the “plug-and-play” effect you describe.

For example, “just extend this class by defining your own foo()” or maybe “annotate your functions with these magic words” and so on. Most concepts were already well established in Java when they found their way to Python, and many Python programmers already had a lot of experience in Java when they made the transition.

Python got popular because it was easy. It was easy because Guido’s fiat insisted it be so. It was conservative in its development: there was one Python, not many, and its features were simple and did not proliferate.

It was also easy because it followed the popular paradigms: imperative programming and object-oriented programming, and later the elementary parts of functional programming.

Python got libraries because it was popular.

Logic programming and Prolog were never easy for the traditionally educated, so they never got popular, and so they never gained libraries.

Jan Burse has a very interesting point about the extensibility of libraries. It could certainly help make things more powerful if libraries were designed to enable this.


Python is a lowest common denominator language, by design. But it is not out of vogue. According to Stack Overflow’s 2024 survey, more people want to learn Python than any other language, and 67% of its users say they admire it.

Is it a mess? That’s subjective, but it’s not an opinion I’ve heard before.

Truffle Prolog is an unfamiliar concept. I searched, and I think you mean Oracle’s Truffle, the GraalVM framework for implementing dynamic programming languages.