The problem I am trying to address was already addressed here:
ILP and Reasoning by Analogy
Intuitively, the idea is to use what is already known to explain
new observations that appear similar to old knowledge. In a sense,
it is the opposite of induction, where to explain the observations one
comes up with new hypotheses/theories.
Vesna Poprcova et al. - 2010
https://www.researchgate.net/publication/220141214
The problem is that ILP does not try to learn and apply
analogies, whereas autoencoders and transformers typically try
to "grok" analogies, so that they can perform well in certain
domains with less training data. On the encoder side they do
some inference even for unseen input data, and on the decoder
side they do some generation even for latent-space configurations
arising from unseen input data. By unseen data I mean data not
in the training set. The full context window may tune the
inference and generation, which appeals to:
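The encode/decode behaviour on unseen data can be sketched with a minimal linear stand-in for an autoencoder (a PCA projection; real autoencoders use nonlinear networks, and all data below is synthetic and invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data lying near the line y = 2x in 2-D space.
t = rng.uniform(-1, 1, size=100)
train = np.column_stack([t, 2 * t]) + 0.01 * rng.normal(size=(100, 2))

# Linear "autoencoder": encoder/decoder via the top principal direction.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean)
direction = vt[0]  # learned 1-D latent axis


def encode(x):
    """2-D input -> 1-D latent code (projection onto the learned axis)."""
    return (x - mean) @ direction


def decode(z):
    """1-D latent code -> 2-D output (point on the learned axis)."""
    return mean + z * direction


# An unseen input (not in the training set): the encoder still maps it to
# a latent code, and the decoder generates a nearby point on the learned
# structure — the behaviour the paragraph above attributes to autoencoders.
x_new = np.array([0.5, 0.0])
x_rec = decode(encode(x_new))
print(x_rec)  # reconstruction lies close to the line y = 2x
```

The reconstruction of the unseen point lands on the structure learned from training data, which is the sense in which the decoder "generates" for unseen latent configurations.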
Analogy as a Search Procedure
Rumelhart and Abrahamson showed that when presented with
analogy problems like monkey:pig::gorilla:X, with rabbit,
tiger, cow, and elephant as alternatives for X, subjects rank
the four options following the parallelogram rule.
Matías Osta-Vélez - 2022
https://www.researchgate.net/publication/363700634
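The parallelogram rule can be sketched directly on vectors: to solve a:b::c:X, compute the ideal point vec(c) + (vec(b) − vec(a)) and rank the candidates by distance to it. The 2-D vectors below are invented for illustration and do not come from Rumelhart and Abrahamson's data:

```python
import numpy as np

# Hypothetical 2-D concept vectors (illustrative only, not real embeddings).
vec = {
    "monkey":   np.array([1.0, 1.0]),
    "pig":      np.array([2.0, 0.0]),
    "gorilla":  np.array([1.5, 2.0]),
    "rabbit":   np.array([2.0, 0.5]),
    "tiger":    np.array([0.5, 2.5]),
    "cow":      np.array([2.5, 1.0]),
    "elephant": np.array([3.0, 2.0]),
}


def rank_by_parallelogram(a, b, c, candidates):
    """Solve a:b::c:X via the parallelogram rule.

    The ideal answer is vec[c] + (vec[b] - vec[a]); candidates are
    ranked by Euclidean distance to that point.
    """
    target = vec[c] + (vec[b] - vec[a])
    return sorted(candidates, key=lambda w: np.linalg.norm(vec[w] - target))


ranking = rank_by_parallelogram("monkey", "pig", "gorilla",
                                ["rabbit", "tiger", "cow", "elephant"])
print(ranking)  # ['cow', 'rabbit', 'elephant', 'tiger'] with these vectors
```

This is the same vector-offset scheme popularized by word-embedding analogies; with these made-up vectors the ideal point coincides with "cow", so it ranks first.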
There are learning methods that work similarly to ILP in that
they are based on positive and negative samples, and their
statistics can involve bilinear forms.
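A minimal sketch of that idea: score pairs with a bilinear form s(x, y) = xᵀWy and fit W from positive and negative samples by logistic-loss gradient descent. The data is synthetic and the setup is a hypothetical illustration, not a specific published method:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Synthetic positive pairs: y is a noisy copy of x, so a good bilinear
# score should rate (x, y) highly. Negative pairs are unrelated vectors.
pos = []
for _ in range(20):
    x = rng.normal(size=d)
    pos.append((x, x + 0.1 * rng.normal(size=d)))
neg = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(20)]


def score(x, y, W):
    """Bilinear compatibility score s(x, y) = x^T W y."""
    return x @ W @ y


# Gradient descent on the logistic loss: positive pairs are pushed toward
# high scores, negative pairs toward low scores. The gradient of the score
# with respect to W is the outer product x y^T.
W = np.zeros((d, d))
lr = 0.5
for _ in range(200):
    grad = np.zeros_like(W)
    for label, pairs in ((1.0, pos), (0.0, neg)):
        for x, y in pairs:
            p = 1.0 / (1.0 + np.exp(-score(x, y, W)))
            grad += (p - label) * np.outer(x, y)
    W -= lr * grad / (len(pos) + len(neg))

avg_pos = np.mean([score(x, y, W) for x, y in pos])
avg_neg = np.mean([score(x, y, W) for x, y in neg])
print(avg_pos > avg_neg)
```

After training, positive pairs score higher on average than negative ones, mirroring how such positive/negative statistics can be captured by a bilinear form.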