But can you do the 7-segment challenge with ILP?
Well, you don't need to emulate autoencoders. When I asked for something
directly available in Prolog, I didn't ask for "emulation"; I meant directly
available in Prolog. You can implement them directly in Prolog with little effort.
For example, you can follow the idea of so-called eForests:
AutoEncoder by Forest
Ji Feng, Zhi-Hua Zhou, 26 Sep 2017
https://arxiv.org/abs/1709.09018
The paper once won an AAAI award. Let's first look at supervised
training. In supervised training you provide the latent space yourself:
% https://en.wikipedia.org/wiki/Seven-segment_display
data(enc, [0,0,0,0,0,0,0], [1,1,1,1]).
data(enc, [1,1,1,1,1,1,0], [0,0,0,0]).
data(enc, [0,1,1,0,0,0,0], [0,0,0,1]).
data(enc, [1,1,0,1,1,0,1], [0,0,1,0]).
data(enc, [1,1,1,1,0,0,1], [0,0,1,1]).
data(enc, [0,1,1,0,0,1,1], [0,1,0,0]).
data(enc, [1,0,1,1,0,1,1], [0,1,0,1]).
data(enc, [1,0,1,1,1,1,1], [0,1,1,0]).
data(enc, [1,1,1,0,0,0,0], [0,1,1,1]).
data(enc, [1,1,1,1,1,1,1], [1,0,0,0]).
data(enc, [1,1,1,1,0,1,1], [1,0,0,1]).
data(dec, [1,1,1,1], [0,0,0,0,0,0,0]).
data(dec, [0,0,0,0], [1,1,1,1,1,1,0]).
data(dec, [0,0,0,1], [0,1,1,0,0,0,0]).
data(dec, [0,0,1,0], [1,1,0,1,1,0,1]).
data(dec, [0,0,1,1], [1,1,1,1,0,0,1]).
data(dec, [0,1,0,0], [0,1,1,0,0,1,1]).
data(dec, [0,1,0,1], [1,0,1,1,0,1,1]).
data(dec, [0,1,1,0], [1,0,1,1,1,1,1]).
data(dec, [0,1,1,1], [1,1,1,0,0,0,0]).
data(dec, [1,0,0,0], [1,1,1,1,1,1,1]).
data(dec, [1,0,0,1], [1,1,1,1,0,1,1]).
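As a quick sanity check, the two tables should be inverses of each other.
The roundtrip/0 below is just an illustrative helper of mine, not part of
the eForest code; it succeeds because every enc fact has a matching dec fact:

% hypothetical check: decoding each latent code gives back the pattern
roundtrip :-
    forall(data(enc, Segs, Latent),
           data(dec, Latent, Segs)).

?- roundtrip.
true.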
It takes a few milliseconds to learn a Bayes classifier for the encoder
and another for the decoder. This works even in SWI-Prolog. The output C
is a decision tree; since it is quite big, I do not show it:
?- time(tree_assign_list(4, enc, S, C)).
% 39,126 inferences, 0.000 CPU in 0.022 seconds (0% CPU, Infinite Lips)
S = 0,
C = ...
?- time(tree_assign_list(7, dec, S, C)).
% 4,857 inferences, 0.000 CPU in 0.001 seconds (0% CPU, Infinite Lips)
S = 0,
C = ...

The score S = 0 shows that both decision trees have no error.
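The predicate tree_assign_list/4 is from my own library. If you want
something self-contained to play with, a minimal decision-tree learner over
such 0/1 attribute lists fits in a few clauses. The names learn/2, classify/3
and nth_is/3 below are only an illustrative sketch, not the library code:

% Tree = leaf(Class) | node(Pos, ZeroBranch, OneBranch);
% examples are Attrs-Class pairs over 0/1 attribute lists.
learn(Examples, leaf(Class)) :-            % all examples agree: make a leaf
    Examples = [_-Class|_],
    forall(member(_-C, Examples), C == Class), !.
learn(Examples, node(Pos, T0, T1)) :-      % split on first useful position
    Examples = [Attrs-_|_],
    length(Attrs, N),
    between(1, N, Pos),
    include(nth_is(Pos, 0), Examples, E0),
    include(nth_is(Pos, 1), Examples, E1),
    E0 \== [], E1 \== [], !,
    learn(E0, T0),
    learn(E1, T1).

nth_is(Pos, V, Attrs-_) :- nth1(Pos, Attrs, V).

classify(leaf(Class), _, Class).
classify(node(Pos, T0, T1), Attrs, Class) :-
    nth1(Pos, Attrs, V),
    (   V == 0 -> classify(T0, Attrs, Class)
    ;   classify(T1, Attrs, Class)
    ).

For example, learning the encoder and then encoding the digit 1:

?- findall(X-Y, data(enc, X, Y), Es), learn(Es, T),
   classify(T, [0,1,1,0,0,0,0], C).
Es = ..., T = ...,
C = [0,0,0,1].

(It fails if two examples share identical attributes with different
classes; that cannot happen with the data above.)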
But now unsupervised training: it is based on the seg7 data, which also
contains some alternate digit forms. It's quite fun to watch the score
reach zero.
The training uses backpropagation, but with decision trees instead of
neural networks. Here is an example latent space that the hidden-layer
algorithm of the autoencoder finds:
?- tree_guess_list(4, seg7, 7, 100, S, E), write(S), nl,
write('----------'), nl, data(seg7, X, _),
tree_current_list(E, X, Y), write(Y), nl, fail; true.
0
----------
[1,1,1,0]
[1,1,1,1]
[0,1,1,1]
[0,0,1,1]
[0,1,1,0]
[1,0,1,0]
[1,0,1,1]
[0,0,0,1]
[1,0,0,1]
[0,1,0,1]
[0,0,0,0]
[0,0,0,0]
[0,1,0,0]
[1,1,0,1]
[1,0,0,0]
true.
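Again, tree_guess_list/6 is my own library code. As a rough, self-contained
stand-in for the unsupervised step, here is a naive random-search version:
it draws random 4-bit codes for the patterns until a collision score reaches
zero. It scores only code collisions, a crude proxy for the reconstruction
error that the real training minimizes; random_code/2, collision_score/2 and
guess_codes/3 are illustrative names, assuming data(seg7, Pattern, _) facts:

% draw a random 0/1 code of the given length
random_code(Bits, Code) :-
    length(Code, Bits),
    maplist(random_between(0, 1), Code).

% number of patterns whose code collides with another pattern's code
collision_score(Codes, S) :-
    msort(Codes, All),
    sort(Codes, Distinct),
    length(All, N),
    length(Distinct, M),
    S is N - M.

% retry random guesses until the score reaches zero
guess_codes(Bits, Patterns, Codes) :-
    length(Patterns, N),
    length(Codes, N),
    repeat,
    maplist(random_code(Bits), Codes),
    collision_score(Codes, 0),
    !.

?- findall(X, data(seg7, X, _), Ps), guess_codes(4, Ps, Cs).

Pure random search like this may need thousands of retries to place 15
patterns into 16 codes without collision; the point of the eForest-style
training is to reach score zero much faster by improving the codes
incrementally.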
Lots of fun!