Autumn Challenge: Short Deadfish Numbers

Solve this problem:

Deadfish is one of the best-known non-Turing-complete programming languages.
It has only one accumulator (which starts at 0) to store data, and only four commands:

i - Increment the accumulator
s - Square the accumulator
d - Decrement the accumulator
o - Output the accumulator

Create a program that will input a number and output Deadfish
code to display the number. It must work for any integer from 0 to 255.

https://codegolf.stackexchange.com/questions/40124/short-deadfish-numbers/

Twist: don’t write the solution in plain Prolog, but in your favorite Prolog
with your favorite state extension. For example, Mercury-style state
variables via (!)/1, or some other Prolog language extension that helps
with dealing with state.

Here’s my take (no state extensions :frowning: ):

sdn.pl
:- module(sdn, [sdn/2,eval_sdn/2]).

:- dynamic sdn_cache/5.

% sdn(+N, -Program): Program is a short Deadfish program that outputs N.
sdn(0,[o]) :- !.
sdn(1,[i,o]) :- !.
sdn(I,[i,i|P]) :-
    retractall(sdn_cache(_,_,_,_,_)),
    sdn(I, 2, P, [o], _).

% sdn(+Target, +Acc, -Program, ?Tail, -Length): memoized search for a
% shortest instruction sequence (as difference list Program\Tail) that
% takes the accumulator from Acc to Target.
sdn(I, A, P, T, L) :-
    sdn_cache(I, A, P, T, L),
    !.
sdn(I, I, T, T, 0) :- !.
sdn(I, A, P, T, L) :-
    sdn_(I, A, P, T, L),
    asserta(sdn_cache(I, A, P, T, L)).
sdn_(I, A, P, T, L) :-
    A < I,
    !,
    C is A * A,
    B is A + 1,
    (   C - I > I - B
    ->  sdn(I, B, P0, T, L0),
        P = [i|P0],
        L is L0 + 1
    ;   sdn(I, C, P1, T, L1),
        sdn(I, B, P0, T, L0),
        (   L0 < L1
        ->  P = [i|P0],
            L is L0 + 1
        ;   P = [s|P1],
            L is L1 + 1
        )
    ).
sdn_(I, A, [d|P], T, L) :-
    B is A - 1,
    sdn(I, B, P, T, L0),
    L is L0 + 1.

eval_sdn(P,A) :-
    eval_sdn(P,0,A).

eval_sdn([], A, A).
eval_sdn([H|T], A0, A) :-
    eval_sdn_(H,A0,A1),
    eval_sdn(T,A1,A).

eval_sdn_(i, A0, A) :- A is A0 + 1.
eval_sdn_(s, A0, A) :- A is A0 * A0.
eval_sdn_(d, A0, A) :- A is A0 - 1.
eval_sdn_(o, A, A)  :- writeln(A).

This gives:

?- forall(between(0,255,I),(time(sdn(I,P)),eval_sdn(P,I))).
% -1 inferences, 0.000 CPU in 0.000 seconds (47% CPU, -125000 Lips)
0
% -1 inferences, 0.000 CPU in 0.000 seconds (55% CPU, -166667 Lips)
1
% 2 inferences, 0.003 CPU in 0.003 seconds (98% CPU, 688 Lips)
2
% 10 inferences, 0.000 CPU in 0.000 seconds (74% CPU, 588235 Lips)
3
% 23 inferences, 0.000 CPU in 0.000 seconds (81% CPU, 1277778 Lips)
4
% 31 inferences, 0.000 CPU in 0.000 seconds (77% CPU, 1550000 Lips)
5
% 39 inferences, 0.000 CPU in 0.000 seconds (80% CPU, 1258065 Lips)
6
% 64 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1684211 Lips)
7
% 65 inferences, 0.000 CPU in 0.000 seconds (82% CPU, 1805556 Lips)
8
% 67 inferences, 0.000 CPU in 0.000 seconds (85% CPU, 1763158 Lips)
9
% 75 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1829268 Lips)
10
% 118 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1843750 Lips)
11
% 120 inferences, 0.000 CPU in 0.000 seconds (84% CPU, 1875000 Lips)
12
% 123 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1921875 Lips)
13
% 124 inferences, 0.000 CPU in 0.000 seconds (81% CPU, 1850746 Lips)
14
% 126 inferences, 0.000 CPU in 0.000 seconds (85% CPU, 947368 Lips)
15
% 187 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1255034 Lips)
16
% 189 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1235294 Lips)
17
% 191 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1193750 Lips)
18
% 193 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1054645 Lips)
19
% 195 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1189024 Lips)
20
% 197 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 1223602 Lips)
21
% 269 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1154506 Lips)
22
% 271 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1575581 Lips)
23
% 273 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1605882 Lips)
24
% 275 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1608187 Lips)
25
% 277 inferences, 0.000 CPU in 0.000 seconds (87% CPU, 1573864 Lips)
26
% 279 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 989362 Lips)
27
% 281 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1185654 Lips)
28
% 366 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1224080 Lips)
29
% 368 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1210526 Lips)
30
% 370 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1213115 Lips)
31
% 371 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1466403 Lips)
32
% 373 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1457031 Lips)
33
% 375 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1404494 Lips)
34
% 377 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1406716 Lips)
35
% 379 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1403704 Lips)
36
% 476 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1408284 Lips)
37
% 478 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1377522 Lips)
38
% 480 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1445783 Lips)
39
% 482 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 950690 Lips)
40
% 484 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1301075 Lips)
41
% 486 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1302949 Lips)
42
% 488 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1277487 Lips)
43
% 489 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1124138 Lips)
44
% 491 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 1252551 Lips)
45
% 600 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1360544 Lips)
46
% 602 inferences, 0.001 CPU in 0.001 seconds (96% CPU, 517627 Lips)
47
% 604 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1830303 Lips)
48
% 606 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1792899 Lips)
49
% 608 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1767442 Lips)
50
% 610 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1502463 Lips)
51
% 612 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 976077 Lips)
52
% 614 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1060449 Lips)
53
% 616 inferences, 0.000 CPU in 0.001 seconds (90% CPU, 1299578 Lips)
54
% 618 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 1457547 Lips)
55
% 739 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1282986 Lips)
56
% 741 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1224793 Lips)
57
% 742 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1257627 Lips)
58
% 744 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1461690 Lips)
59
% 746 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1404896 Lips)
60
% 748 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1435701 Lips)
61
% 750 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1409774 Lips)
62
% 752 inferences, 0.001 CPU in 0.002 seconds (96% CPU, 516838 Lips)
63
% 754 inferences, 0.000 CPU in 0.000 seconds (88% CPU, 1830097 Lips)
64
% 756 inferences, 0.001 CPU in 0.001 seconds (88% CPU, 1138554 Lips)
65
% 758 inferences, 0.001 CPU in 0.001 seconds (88% CPU, 1191824 Lips)
66
% 891 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1197581 Lips)
67
% 893 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1242003 Lips)
68
% 895 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1110422 Lips)
69
% 897 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1220408 Lips)
70
% 900 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1444623 Lips)
71
% 901 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1427892 Lips)
72
% 903 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1413146 Lips)
73
% 904 inferences, 0.002 CPU in 0.002 seconds (95% CPU, 591623 Lips)
74
% 906 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 1709434 Lips)
75
% 908 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 1709981 Lips)
76
% 910 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 1059371 Lips)
77
% 912 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 1008850 Lips)
78
% 1,057 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1183651 Lips)
79
% 1,059 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1159912 Lips)
80
% 1,061 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1128723 Lips)
81
% 1,063 inferences, 0.002 CPU in 0.002 seconds (95% CPU, 668553 Lips)
82
% 1,065 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1475069 Lips)
83
% 1,067 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1459644 Lips)
84
% 1,069 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1436828 Lips)
85
% 1,071 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1465116 Lips)
86
% 1,073 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1546110 Lips)
87
% 1,075 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1535714 Lips)
88
% 1,077 inferences, 0.002 CPU in 0.002 seconds (95% CPU, 476549 Lips)
89
% 1,079 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1259043 Lips)
90
% 1,081 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1342857 Lips)
91
% 1,237 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1211557 Lips)
92
% 1,239 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1333692 Lips)
93
% 1,241 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1334409 Lips)
94
% 1,243 inferences, 0.002 CPU in 0.002 seconds (95% CPU, 739001 Lips)
95
% 1,245 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1335837 Lips)
96
% 1,247 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1269857 Lips)
97
% 1,249 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1288958 Lips)
98
% 1,251 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1470035 Lips)
99
% 1,253 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 742739 Lips)
100
% 1,255 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 946456 Lips)
101
% 1,257 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1161738 Lips)
102
% 1,259 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1151876 Lips)
103
% 1,261 inferences, 0.001 CPU in 0.001 seconds (90% CPU, 1227848 Lips)
104
% 1,263 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 794340 Lips)
105
% 1,432 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1175698 Lips)
106
% 1,434 inferences, 0.001 CPU in 0.001 seconds (89% CPU, 1211149 Lips)
107
% 1,436 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1223169 Lips)
108
% 1,438 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 719360 Lips)
109
% 1,440 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1226576 Lips)
110
% 1,442 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1209732 Lips)
111
% 1,443 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1300000 Lips)
112
% 1,445 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 543846 Lips)
113
% 1,447 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1153907 Lips)
114
% 1,449 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1231096 Lips)
115
% 1,451 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1384542 Lips)
116
% 1,453 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 796601 Lips)
117
% 1,455 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1463783 Lips)
118
% 1,457 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1429833 Lips)
119
% 1,459 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1321558 Lips)
120
% 1,640 inferences, 0.002 CPU in 0.002 seconds (95% CPU, 719930 Lips)
121
% 1,642 inferences, 0.001 CPU in 0.002 seconds (92% CPU, 1100536 Lips)
122
% 1,644 inferences, 0.001 CPU in 0.002 seconds (92% CPU, 1171775 Lips)
123
% 1,646 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 781206 Lips)
124
% 1,648 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 785510 Lips)
125
% 1,650 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 989802 Lips)
126
% 1,652 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 727113 Lips)
127
% 1,654 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1291179 Lips)
128
% 1,656 inferences, 0.001 CPU in 0.002 seconds (90% CPU, 1212299 Lips)
129
% 1,658 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 849821 Lips)
130
% 1,660 inferences, 0.001 CPU in 0.002 seconds (92% CPU, 1199422 Lips)
131
% 1,662 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1278462 Lips)
132
% 1,664 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 800000 Lips)
133
% 1,665 inferences, 0.001 CPU in 0.001 seconds (92% CPU, 1306907 Lips)
134
% 1,667 inferences, 0.001 CPU in 0.001 seconds (91% CPU, 1250563 Lips)
135
% 1,669 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 879347 Lips)
136
% 1,862 inferences, 0.003 CPU in 0.003 seconds (92% CPU, 734227 Lips)
137
% 1,864 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 910601 Lips)
138
% 1,866 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 713849 Lips)
139
% 1,868 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1125301 Lips)
140
% 1,870 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1090379 Lips)
141
% 1,872 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 694105 Lips)
142
% 1,874 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1067198 Lips)
143
% 1,876 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1100939 Lips)
144
% 1,878 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 704955 Lips)
145
% 1,880 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1150551 Lips)
146
% 1,882 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1149664 Lips)
147
% 1,884 inferences, 0.002 CPU in 0.003 seconds (95% CPU, 787625 Lips)
148
% 1,886 inferences, 0.003 CPU in 0.003 seconds (92% CPU, 731008 Lips)
149
% 1,888 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 912959 Lips)
150
% 1,890 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 706542 Lips)
151
% 1,892 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1092379 Lips)
152
% 1,894 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1073696 Lips)
153
% 2,099 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 685500 Lips)
154
% 2,101 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1001907 Lips)
155
% 2,103 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 763894 Lips)
156
% 2,105 inferences, 0.002 CPU in 0.002 seconds (92% CPU, 1067444 Lips)
157
% 2,106 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 792922 Lips)
158
% 2,108 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1137615 Lips)
159
% 2,110 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 822612 Lips)
160
% 2,112 inferences, 0.003 CPU in 0.003 seconds (93% CPU, 727523 Lips)
161
% 2,114 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 654692 Lips)
162
% 2,116 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1073022 Lips)
163
% 2,118 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 835503 Lips)
164
% 2,120 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1143474 Lips)
165
% 2,122 inferences, 0.002 CPU in 0.003 seconds (95% CPU, 867539 Lips)
166
% 2,124 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1130990 Lips)
167
% 2,126 inferences, 0.002 CPU in 0.003 seconds (94% CPU, 869530 Lips)
168
% 2,128 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1107756 Lips)
169
% 2,130 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 832682 Lips)
170
% 2,132 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1144391 Lips)
171
% 2,349 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 828279 Lips)
172
% 2,351 inferences, 0.003 CPU in 0.004 seconds (93% CPU, 722717 Lips)
173
% 2,353 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 652343 Lips)
174
% 2,355 inferences, 0.002 CPU in 0.003 seconds (93% CPU, 978803 Lips)
175
% 2,357 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 720575 Lips)
176
% 2,359 inferences, 0.002 CPU in 0.003 seconds (93% CPU, 955835 Lips)
177
% 2,361 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 733686 Lips)
178
% 2,363 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1049290 Lips)
179
% 2,365 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 790441 Lips)
180
% 2,367 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1103497 Lips)
181
% 2,369 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 875786 Lips)
182
% 2,371 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1167980 Lips)
183
% 2,372 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 868546 Lips)
184
% 2,374 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1140798 Lips)
185
% 2,376 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 847663 Lips)
186
% 2,378 inferences, 0.002 CPU in 0.002 seconds (93% CPU, 1166258 Lips)
187
% 2,380 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 815627 Lips)
188
% 2,382 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1052120 Lips)
189
% 2,384 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 820937 Lips)
190
% 2,613 inferences, 0.002 CPU in 0.003 seconds (93% CPU, 1082436 Lips)
191
% 2,615 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 799694 Lips)
192
% 2,617 inferences, 0.002 CPU in 0.003 seconds (94% CPU, 1047638 Lips)
193
% 2,619 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 805846 Lips)
194
% 2,621 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1195712 Lips)
195
% 2,623 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 903548 Lips)
196
% 2,625 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1291831 Lips)
197
% 2,627 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 967587 Lips)
198
% 2,629 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1332489 Lips)
199
% 2,631 inferences, 0.004 CPU in 0.004 seconds (96% CPU, 684978 Lips)
200
% 2,633 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 1039069 Lips)
201
% 2,635 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 800912 Lips)
202
% 2,637 inferences, 0.002 CPU in 0.003 seconds (94% CPU, 1118795 Lips)
203
% 2,639 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 791304 Lips)
204
% 2,641 inferences, 0.002 CPU in 0.003 seconds (94% CPU, 1100875 Lips)
205
% 2,643 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 845219 Lips)
206
% 2,645 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1281492 Lips)
207
% 2,647 inferences, 0.003 CPU in 0.003 seconds (96% CPU, 869294 Lips)
208
% 2,649 inferences, 0.002 CPU in 0.002 seconds (94% CPU, 1306213 Lips)
209
% 2,651 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 937412 Lips)
210
% 2,892 inferences, 0.002 CPU in 0.003 seconds (94% CPU, 1176566 Lips)
211
% 2,893 inferences, 0.003 CPU in 0.004 seconds (96% CPU, 838551 Lips)
212
% 2,895 inferences, 0.003 CPU in 0.004 seconds (93% CPU, 852725 Lips)
213
% 2,897 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 942420 Lips)
214
% 2,899 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 986390 Lips)
215
% 2,901 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 1030551 Lips)
216
% 2,903 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 933140 Lips)
217
% 2,905 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 830237 Lips)
218
% 2,907 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 1044932 Lips)
219
% 2,909 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 829957 Lips)
220
% 2,911 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 904318 Lips)
221
% 2,913 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 967774 Lips)
222
% 2,915 inferences, 0.003 CPU in 0.003 seconds (93% CPU, 910084 Lips)
223
% 2,917 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 969425 Lips)
224
% 2,919 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 1006205 Lips)
225
% 2,921 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 1016000 Lips)
226
% 2,923 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 1026334 Lips)
227
% 2,925 inferences, 0.003 CPU in 0.003 seconds (94% CPU, 903335 Lips)
228
% 2,927 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 1023785 Lips)
229
% 2,929 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 837815 Lips)
230
% 2,931 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 833855 Lips)
231
% 3,184 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 903519 Lips)
232
% 3,186 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 756590 Lips)
233
% 3,188 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 870800 Lips)
234
% 3,190 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 837270 Lips)
235
% 3,193 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 939118 Lips)
236
% 3,195 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 873428 Lips)
237
% 3,198 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 948680 Lips)
238
% 3,199 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 850120 Lips)
239
% 3,201 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 927826 Lips)
240
% 3,203 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 855731 Lips)
241
% 3,204 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 871363 Lips)
242
% 3,206 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 959019 Lips)
243
% 3,208 inferences, 0.003 CPU in 0.003 seconds (95% CPU, 982843 Lips)
244
% 3,210 inferences, 0.004 CPU in 0.005 seconds (94% CPU, 742884 Lips)
245
% 3,212 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 941383 Lips)
246
% 3,214 inferences, 0.004 CPU in 0.005 seconds (95% CPU, 741407 Lips)
247
% 3,216 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 921226 Lips)
248
% 3,218 inferences, 0.004 CPU in 0.004 seconds (94% CPU, 837803 Lips)
249
% 3,220 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 796635 Lips)
250
% 3,222 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 864966 Lips)
251
% 3,224 inferences, 0.003 CPU in 0.004 seconds (95% CPU, 950472 Lips)
252
% 3,226 inferences, 0.004 CPU in 0.004 seconds (95% CPU, 897857 Lips)
253
% 3,491 inferences, 0.004 CPU in 0.004 seconds (96% CPU, 904404 Lips)
254
% 3,493 inferences, 0.005 CPU in 0.005 seconds (95% CPU, 771934 Lips)
255
true.

Thank you for sharing your solution.

Are you saying your solution computes a shortest expression? Like
this Picat solution by Vasil Diadov, a modification of Sergey Dymchenko’s:

19 ioiiso

https://codegolf.stackexchange.com/a/51794

The above is a little tricky, since it first prints “1” and then prints “9”,
which makes it very short. With your Prolog solution I get:

?- sdn(19,P).
P = [i, i, s, s, i, i, i, o].

This only prints at the end, making it longer. Because of this printing
mechanism, a solution could profit from a state mechanism and a
more concise formulation, since there is not only the Deadfish
accumulator state, but also the Deadfish output state.
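
For reference, the digit-by-digit trick can be traced with the eval_sdn/2
from the first post (its writeln/1 puts each output on its own line, so
the two digits of 19 appear separately):

?- eval_sdn([i,o,i,i,s,o], A).
1
9
A = 9.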

Very cool, I didn’t consider printing each digit separately; that indeed adds more room for generating shorter programs.

Here’s my simple-minded solution, using @jan’s recent dict-based replacement for EDCG (which I rather like) and “lambda expressions”. I haven’t looked at anyone else’s code; I suspect mine isn’t the best (for example, computing the shortest code for 90, [i,i,i,s,s,i,i,i,i,i,i,i,i,i,o], took 25 seconds (229,810,315 inferences)).
(I also found a problem with library(lambda), which I’ll post separately)

My score (the sum of all lengths for 1…255, as specified in the challenge) is:

?- time(score_deadfish(Score)).
% 1,093,459,368 inferences, 111.959 CPU in 111.964 seconds (100% CPU, 9766574 Lips)
Score = 2060.

The code runs about 10% faster if the “optimise” flag is set on.

deadfish.pl
:- use_module(library(yall)).

% include Jan's term-expansion code from https://swi-prolog.discourse.group/t/dealing-with-state/6866/10

min_deadfish(Num, Ops) :-
    between(1, 100000000, OpsLen), % iterative deepening
    deadfish(Num, OpsLen, Ops),
    !.

deadfish(Num, OpsLen, Ops) :-
    number_digits(Num, Ds),
    State0 = #{acc:0, ops:Ops, output:Ds},
    State = #{acc:_, ops:[], output:[]},
    call_dcg(seq_of_len(OpsLen, deadfish), State0, State).

seq_of_len(0, _) --> [].
seq_of_len(Len, P) -->
    { Len > 0 },
    call(P),
    { Len2 is Len - 1 },
    seq_of_len(Len2, P).

deadfish -->
    [s]<ops,
    [S0,S]>>(S is S0**2)<acc.
deadfish -->
    [i]<ops,
    [S0,S]>>(S is S0+1)<acc.
deadfish -->
    [d]<ops,
    [S0,S]>>(S is S0-1)<acc.
deadfish -->
    [o]<ops,
    value(Acc)<acc,
    { number_digits(Acc, Digits) },
    output_digits(Digits).

value(V, V, V).

number_digits(Number, Digits) :-
    number_codes(Number, Codes),
    maplist(code_digit, Codes, Digits).

code_digit(Code, Digit) :-
    Digit is Code - 0'0.  % '

output_digits([]) --> [].
output_digits([D|Ds]) -->
    [D]<output,
    output_digits(Ds).
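
For readers without the linked post: the Body<Key notation above applies Body, DCG-style, to a single field of the state dict. The linked code does this by term expansion at compile time; a hypothetical runtime rendering of the same idea (assuming the state is an ordinary dict; the real expansion may differ) would be:

% '<'(Body, Key, State0, State): thread the Key field of the state
% dict through Body. A terminal list is consumed from the field;
% anything else is called as a non-terminal on the old and new value.
'<'(List, Key, State0, State) :-
    is_list(List), !,
    get_dict(Key, State0, V0),
    append(List, V, V0),
    put_dict(Key, State0, V, State).
'<'(Body, Key, State0, State) :-
    get_dict(Key, State0, V0),
    call(Body, V0, V),
    put_dict(Key, State0, V, State).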

Compute score
score_deadfish(Score) :-
    findall(X, (between(1,255,N),
                deadfish_time(N, X)), Xs),
    maplist(deadfish_length, Xs, Lens),
    sum_list(Lens, Score).

deadfish_length(_Inferences:_N:_Ops:Len:_Cpu, Len).

deadfish_time(N, X) :-
    call_time(min_deadfish(N, Ops),T),
    length(Ops, Len),
    X=T.inferences:N:Ops:Len:T.cpu.

Thank you for sharing your solution.

The code golf Picat solution needs this much time for all shortest codes:

CPU time 2.2 seconds

Maybe some tabling and dynamic programming would help?
Can SWI-Prolog table dicts? Can SWI-Prolog table DCGs?

Picat has a planner module and a predicate best_plan/2, which is
used in the code golf website’s Picat solution. The Picat book only
contains a simple solution to the Deadfish problem, without multiple
outputs, unlike the code golf website’s solution, which is a little bit
more elaborate than the Picat book’s:

Constraint Solving and Planning with Picat
http://picat-lang.org/picatbook2015/constraint_solving_and_planning_with_picat.pdf

I compared the two Picat programs at the code golf site; they ran in similar times, so the tabling doesn’t seem to help. What does seem to help are various restrictions in the code, such as: the accumulator can’t be negative, don’t use the “s” operator if the accumulator is greater than the target number, etc. (and there’s a mysterious test that the square is not equal to 16 in the Picat code).
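
For concreteness, those restrictions can be expressed as guards on the forward step. A sketch (the predicate name and shape are made up, not taken from the Picat code):

% step(?Op, +Target, +A0, -A): one forward Deadfish step with pruning.
step(i, _Target, A0, A) :-
    A is A0 + 1.
step(d, _Target, A0, A) :-
    A0 > 0,          % the accumulator can't become negative
    A is A0 - 1.
step(s, Target, A0, A) :-
    A0 > 1,          % squaring 0 or 1 changes nothing
    A0 =< Target,    % don't square when already past the target
    A is A0 * A0.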

I applied some of these to my code and it ran about twice as fast.

Dynamic programming or tabling requires having sub-problems. The main predicate in my code is deadfish(+Num,+OpsLen,-Ops), so there’s nothing to re-use as far as I can tell. It might make sense to cache sequences of operators with their corresponding accumulator and outputs, but I don’t see an obvious way of doing that with tabling (caching with assertz/1 would be possible, I think) … I think that the Picat code does this, but I find it a bit cryptic. Presumably, I could change my Prolog code to do things in a similar way because the Picat code seems to use a table(+,-,min) predicate – that change would double the size of my program, but might be worth it if I can get a 10x speedup.

Presumably, but @jan is the person to ask.

With some caveats, yes. There’s a section in the XSB documentation on the issues involved with tabling DCGs and some work-arounds.
(I’ve used tabling with a DCG parser to handle left-recursion, but that didn’t involve the difference lists.)

You have to search backwards. If the goal is to produce 90,
going backwards after a hypothetical step “i” generates a subgoal
89, which you can reuse. Think of tabling 1…255 as posed in the problem.

Maybe you need to table a little bit more than 1…255, since a
hypothetical step “d” generates a subgoal 91, and so on, so you
can exceed the range 255. But you can also search forward.
Backward makes more sense, though, since for step “s” you could
use the sqrt/1 evaluable function to go backward, which does not
always give an integer, so you have a nice filter. Maybe one should
try both backward and forward, and see what works better.
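
The sqrt filter can be made concrete. A hypothetical step_backward/3 (a sketch, not the attached code) that maps a result N back to a possible predecessor M:

% step_backward(+Op, +N, -M): M is an accumulator value from which
% the operation Op yields N.
step_backward(i, N, M) :- N > 0, M is N - 1.
step_backward(d, N, M) :- M is N + 1.
step_backward(s, N, M) :- N > 1, M is integer(sqrt(N)), M * M =:= N.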

Edit 07.10.2023
Instead of iterative deepening, one could also heuristically use an
upper and a lower bound for where the search is allowed to go,
plus a cycle-test constraint, i.e. the steps should not lead back to
an already obtained result. Not sure how well this works out.
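
A minimal sketch of that idea, with an ad-hoc upper bound and the visited values of the current path as the cycle test (all names are made up):

% search(+Acc, +Target, +DepthLeft, +Seen, -Ops)
search(N, N, _, _, []).
search(A, N, D, Seen, [Op|Ops]) :-
    D > 0,
    step(Op, A, B),
    B >= 0, B =< 65025,       % crude upper bound (255^2)
    \+ memberchk(B, Seen),    % cycle test: don't revisit a value
    D1 is D - 1,
    search(B, N, D1, [B|Seen], Ops).

step(i, A, B) :- B is A + 1.
step(d, A, B) :- A > 0, B is A - 1.
step(s, A, B) :- A > 1, B is A * A.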

I don’t see how that helps, because the sequences are non-monotonic. For example:

87:iiisdodo
88:iiisdoo
89:iiisdoio
90:iiissiiiiiiiiio
91:iiisoddddddddo
92:iiisodddddddo

Backward search (not sure whether Picat can deduce it from
the forward search) is already faster without tabling. Here is your 90
example replicated. But my code is still stupid: it cannot handle
intermediate output steps of Deadfish. The approach is quite
simple: I first use findall/3 and then keysort/2 to minimize
during dynamic programming; otherwise it also uses iterative
deepening. The dynamic-programming min-arg step is this:

% minimize_backward(+Integer, +Integer, -Integer, -List)
[...]
minimize_backward(N, D, J, [C|L]) :- D > 0, E is D-1,
   findall(K-C-L, (member(C, [i, d, s]),
                   step_backward(C, N, M),
                   minimize_backward(M, E, K, L)), S),
   keysort(S, [K-C-L|_]),
   J is K+1.
[...]

The result is this:

?- time(short(90,R)), write(R), nl.
% 855,088 inferences, 0.094 CPU in 0.079 seconds (119% CPU, 9120939 Lips)
[i,i,i,s,s,i,i,i,i,i,i,i,i,i,o]

Full source code (no tabling, no output step, backward minimize):

short.p.log (1,9 KB)

Do you mean 0.025 seconds, i.e. 25 milliseconds? If it were
25 seconds for one solution, I don’t understand how you get the
result below. Maybe you measured the accumulated time for 1…90?

Edit 07.10.2023
It could be that the Picat solution suffers from forward search.
Here is already a hand-rolled memoization solution. Still no
output step, but tabling and backward search. It’s ca. 150 times
faster than the Picat planner solution: only 15 milliseconds!

?- time(total(S,M,C)).
% 109,751 inferences, 0.016 CPU in 0.015 seconds (103% CPU, 7024064 Lips)
S = 3215,
M = 22,
C = 255.

S is the summed length of the output strings; if we add 255 we get what
the output size would be with one newline character per string as well.
M is the maximum depth that occurred, C is the number of computed solutions.

Full source code (memoization, no output step, backward minimize)

short2.p.log (2,6 KB)

A few individual items (such as 90) took a lot longer than most of the others.

Anyway, with similar optimisations that the Picat program uses, it now takes 9 seconds for all solutions and 1.4 seconds for solving 90.

Wow. I need to study this. :slight_smile:

I only renamed the heads of minimize_backward/4 and added:

:- dynamic memo/4.

% minimize_backward(+Integer, +Integer, -Integer, -List)
minimize_backward(N, D, K, L) :-
   memo(N, D, K, L), !, L \== no.
minimize_backward(N, D, K, L) :-
   minimize_backward2(N, D, K, L), !,
   assertz(memo(N, D, K, L)).
minimize_backward(N, D, _, _) :-
   assertz(memo(N, D, 0, no)), fail.

The above memoization is only good for mode (+,+,-,-). And then:

% total(-Integer, -Integer, -Integer)
total(S, M, C) :-
   aggregate_all(x(sum(K), max(K), count), (between(1,255,N),
                          short(N,L),
                          length(L,K)), x(S,M,C)).

Edit 07.10.2023
I guess I don’t need the output parameter K; it will equal
the input parameter D if we do iterative deepening. Oki Doki.
Will do a new version; maybe it’s even faster.

The min-arg then becomes a cut (!). Using the cut trick, and also
incorporating the output step, I get this program, which computes the
same as Picat, i.e. 2060 summed string length, in 0.039 seconds:

Obfuscated source code (memoization, output step, backward minimize):

leet.p.log (600 Bytes)
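
Stripped of the obfuscation, the shape of the cut trick is roughly this (a sketch, not the attached code): under iterative deepening over the depth D, the first solution found is already a shortest one, so committing with a cut replaces the findall/3 plus keysort/2 minimization:

% short(+N, -Ops): hypothetical driver; minimize_backward/3 is the
% depth-bounded backward search without the length argument.
short(N, Ops) :-
    between(0, 30, D),             % iterative deepening
    minimize_backward(N, D, Ops),
    !.                             % first solution found is shortest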

Almost as short as the Picat solution, which is cheating by not
counting what is behind the statement import planner. I
guess I lost track of the state question and was in golf mode.

It seems that about half of the time is spent in get_dict/5, so maybe it would be better for the term expansion to use something like library(record) rather than a dict, even though the latter is probably more flexible.
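
For reference, the library(record) route replaces the dict by a compound term and generates the access and set predicates at compile time. A minimal sketch of what the state declaration could look like (field names taken from the dict version above):

:- use_module(library(record)).
:- record state(acc=0, ops=[], output=[]).
% This generates, among others:
%   state_acc(+State, -Acc)
%   set_acc_of_state(+Acc, +State0, -State)
%   make_state(+Options, -State)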

Thanks for this interesting challenge !

Here is my solution using pure DCG since it is my favorite state extension ^^

deadfish.pl
deadfish(Prog, I) :-
   length(Prog, _),  % iterative deepening on the program length
   when((ground(I);ground(Digits)), number_chars(I, [D | Digits])),
   dif(D, '0'),      % the output must not start with a leading 0
   phrase(deadfish(Prog, o, 0), [D | Digits]).

deadfish([], o, _) -->
   [].
deadfish([o | Prog], _, I) -->
   {
      I >= 0,
      number_chars(I, Digits)
   },
   Digits,
   deadfish(Prog, o, I).
deadfish([i | Prog], P, I) -->
   {
      dif(P, d),
      I1 is I + 1,
      deadfish_(I1, I2)
   },
   deadfish(Prog, i, I2).
deadfish([d | Prog], P, I) -->
   {
      dif(P, i),
      I1 is I - 1,
      deadfish_(I1, I2)
   },
   deadfish(Prog, d, I2).
deadfish([s | Prog], _, I) -->
   {
      dif(I, 0), dif(I, 1),
      I1 is I * I,
      deadfish_(I1, I2)
   },
   deadfish(Prog, s, I2).
deadfish_(I1, I2) :-
   (  I1 == -1
   -> I2 = 0
   ;  I1 == 256
   -> I2 = 0
   ;  I2 = I1
   ).

Here is the query I use to generate all solutions, print them and compute the final score:

?- numlist(1, 255, Is), time(maplist(deadfish, Progs, Is)),
maplist([Prog, I]>>(format("~|~t~d~3+ ~s~n", [I, Prog])), Progs, Is),
maplist(length, Progs, Lens), sumlist(Lens, ProgsLength),
read_file_to_codes('deadfish.pl', Codes, []), length(Codes, CodeLength),
Score is ProgsLength + CodeLength.
% 8,086,608 inferences, 0.617 CPU in 0.619 seconds (100% CPU, 13106132 Lips)
  1 io
  2 iio
  3 iiio
  4 iiso
  5 iisio
  6 iisiio
  7 iiisddo
  8 iiisdo
  9 iiiso
 10 iodo
 11 ioo
 12 ioio
 13 ioiio
 14 ioiso
 15 ioisio
 16 iisso
 17 iissio
 18 ioiisdo
 19 ioiiso
 20 iioddo
 21 iiodo
 22 iioo
 23 iioio
 24 iioso
 25 iiosio
 26 iiosiio
 27 iioisddo
 28 iioisdo
 29 iioiso
 30 iiioisso
 31 iiioddo
 32 iiiodo
 33 iiioo
 34 iiioio
 35 iiioiio
 36 iisiiso
 37 iiiosddo
 38 iiiosdo
 39 iiioso
 40 iisosso
 41 iisodddo
 42 iisoddo
 43 iisodo
 44 iisoo
 45 iisoio
 46 iisoiio
 47 iisoiiio
 48 iisodsdo
 49 iisodso
 50 iiisddsio
 51 iiisddsiio
 52 iisiodddo
 53 iisioddo
 54 iisiodo
 55 iisioo
 56 iisioio
 57 iisioiio
 58 iisioiiio
 59 iisioddso
 60 iiisdsddddo
 61 iiisdsdddo
 62 iiisdsddo
 63 iiisdsdo
 64 iiisdso
 65 iiisdsio
 66 iisiioo
 67 iisiioio
 68 iisiioiio
 69 iisiioiiio
 70 iiisddodddsso
 71 iiisddoddddddo
 72 iiisddodddddo
 73 iiisddoddddo
 74 iiisddodddo
 75 iiisddoddo
 76 iiisddodo
 77 iiisddoo
 78 iiisddoio
 79 iiissddo
 80 iiissdo
 81 iiisso
 82 iiissio
 83 iiissiio
 84 iiissiiio
 85 iiisdodddo
 86 iiisdoddo
 87 iiisdodo
 88 iiisdoo
 89 iiisdoio
 90 iiisodddddsso
 91 iiisoddddddddo
 92 iiisodddddddo
 93 iiisoddddddo
 94 iiisodddddo
 95 iiisoddddo
 96 iiisodddo
 97 iiisoddo
 98 iiisodo
 99 iiisoo
100 iodoo
101 iodoio
102 iodoiio
103 iodoiiio
104 iodoiiso
105 iodoiisio
106 iodoiisiio
107 iiisiodddo
108 iiisioddo
109 iiisiodo
110 ioodo
111 iooo
112 iooio
113 iooiio
114 iooiso
115 iooisio
116 ioisso
117 ioissio
118 iooiisdo
119 iooiiso
120 ioioddo
121 ioiodo
122 ioioo
123 ioioio
124 ioioso
125 ioiosio
126 ioiosiio
127 ioioisddo
128 ioioisdo
129 ioioiso
130 ioiioisso
131 ioiioddo
132 ioiiodo
133 ioiioo
134 ioiioio
135 ioiioiio
136 ioisiiso
137 ioiiosddo
138 ioiiosdo
139 ioiioso
140 ioisosso
141 ioisodddo
142 ioisoddo
143 ioisodo
144 ioisoo
145 ioisoio
146 ioisoiio
147 ioisoiiio
148 ioisodsdo
149 ioisodso
150 iissdoiso
151 iissdoisio
152 ioisiodddo
153 ioisioddo
154 ioisiodo
155 ioisioo
156 ioisioio
157 ioisioiio
158 ioisioiiio
159 ioisioddso
160 iissoso
161 iissosio
162 iissosiio
163 ioiisdsdo
164 ioiisdso
165 ioiisdsio
166 ioisiioo
167 ioisiioio
168 ioisiioiio
169 iissdddso
170 iissiodso
171 iissiodsio
172 iissiodsiio
173 iissiodsiiio
174 ioiisddodddo
175 ioiisddoddo
176 ioiisddodo
177 ioiisddoo
178 ioiisddoio
179 ioiissddo
180 ioiissdo
181 ioiisso
182 ioiissio
183 ioiissiio
184 ioiissiiio
185 ioiisdodddo
186 ioiisdoddo
187 ioiisdodo
188 ioiisdoo
189 ioiisdoio
190 iissiiiodddso
191 iissddsdddddo
192 iissddsddddo
193 iissddsdddo
194 iissddsddo
195 iissddsdo
196 iissddso
197 ioiisoddo
198 ioiisodo
199 ioiisoo
200 iioddoo
201 iioddoio
202 iioddoiio
203 iioddoiiio
204 iioddoiiso
205 iioddoiisio
206 iioddoiisiio
207 iioddoiiisddo
208 iioddoiiisdo
209 iioddoiiiso
210 iioisio
211 iiodoo
212 iiodoio
213 iiodoiio
214 iiodoiso
215 iiossdo
216 iiosso
217 iiossio
218 iiossiio
219 iiodoiiso
220 iiooddo
221 iioodo
222 iiooo
223 iiooio
224 iiooso
225 iioosio
226 iioosiio
227 iiooisddo
228 iiooisdo
229 iiooiso
230 iioioisso
231 iioioddo
232 iioiodo
233 iioioo
234 iioioio
235 iioioiio
236 iiosiiso
237 iioiosddo
238 iioiosdo
239 iioioso
240 iiososso
241 iiosodddo
242 iiosoddo
243 iiosodo
244 iiosoo
245 iiosoio
246 iiosoiio
247 iiosoiiio
248 iiosodsdo
249 iiosodso
250 iioisddsio
251 iioisddsiio
252 iiosiodddo
253 iiosioddo
254 iiosiodo
255 iiosioo
Is = [1, 2, 3, 4, 5, 6, 7, 8, 9|...],
Progs = [[i, o], [i, i, o], [i, i, i, o], [i, i, s, o], [i, i, s, i|...], [i, i, s|...], [i, i
|...], [i|...], [...|...]|...],
Lens = [2, 3, 4, 4, 5, 6, 7, 6, 5|...],
ProgsLength = 2036,
Codes = [100, 101, 97, 100, 102, 105, 115, 104, 40|...],
CodeLength = 818,
Score = 2854 .

As you can see, the only real state I have here is the output state, as a DCG accumulator.
Furthermore, I would like to argue that for something to qualify as an implicit state, it should only be used rarely in the whole program.
The output state qualifies since it is only used in the case of the o instruction.
On the other hand, the program list and the internal accumulator are used everywhere, and therefore it is more cumbersome to put them in a state-extension syntax than to use them explicitly.

Here are some other interesting aspects of the implementation:

  • I added a number of optimisations:
    • the last instruction of the program should be an o
    • I carry the previous program instruction to avoid sequences like id or di
    • you can’t square a 0 or 1, since it will not change the internal accumulator
    • you can’t output a leading 0
  • The solution is fast :slight_smile: 0.6 seconds to generate all solutions!
  • I generate the optimal set of solutions, with a total length over all programs of 2036. For that, I took advantage of the fact that you can reset the internal accumulator if you reach -1 or 256.
  • The deadfish/2 predicate is fully pure and can also be used to evaluate Deadfish programs (see the example query below).
  • The overall score of the solution, counting the size of the source code, is 2854, which is not too bad compared to other solutions on the codegolf site ^^
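
Here is the purity claim in action, running deadfish/2 in reverse as an evaluator of the digit-by-digit program discussed earlier in the thread:

?- deadfish([i,o,i,i,s,o], I).
I = 19.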

For a different approach in a related language: I gave it a go in answer set programming with Clingo, wrapped by Prolog (if this weren’t a Prolog forum, it would be better to control it with an embedded Python script instead). Individual calls are fast enough (up to 0.05 s), but there is a lot of overhead because I start a new process for each call (total runtime is about 5 seconds). Shortest programs are guaranteed. The decimal-handling code for outputting digits looks horrible, but could be made a lot nicer with embedded Python.

deadfish.lp.pl (1.6 KB)
deadfish_wrap.pl (654 Bytes)

Thank you for sharing your solution.

I was just checking your list: it seems you are getting shorter solutions, since you compute modulo 256.

The Picat solution has:

40 iisoddddo

But Deadfish is not fully modulo; it has, for example, d(0)=0 and s(17)=289.

Edit 07.10.2023
The golfing rules of the challenge say you can subtract 100 from the
string length if the language you chose is Deadfish. So I guess you
can compute Score is ProgsLength + CodeLength - 100.

I can confirm ProgsLength = 2036 on my side now:

?- time(total(S,M,C)).
% 243,050 inferences, 0.031 CPU in 0.050 seconds (62% CPU, 7777600 Lips)
S = 2036,
M = 14,
C = 255.

Backward search and memoization are still giving it a speed boost.

I don’t do a simple modulo but follow the original rules, where the internal accumulator is reset to 0 if it hits -1 or 256.

Here is the original quote from the wikipedia page:

Although the comment in the C implementation states /* Make sure x is not greater then [sic] 256 */, the implementation sets the value to zero if and only if value == -1 || value == 256.

I’ve now got my code down to 1.8 seconds for optimal all-solutions, which is similar to what the Picat code does, but without any “planning” (my code is brute-force with the same optimisations that the Picat code has and that @kwon-young mentioned; and also takes advantage of unification to prune the search space). I’ll post my code once I’ve cleaned it up a bit.

My final speedup was based on profiling that showed get_dict/5 using almost half of the CPU time, so I changed the term expansion to use library(record) – that gained nothing (*) until I “macro-expanded” the get/set calls for the record and got an additional 40% performance improvement.

(*) My interpretation is that the main cost of calling get_dict/5 is in the predicate call itself – for small dicts, the performance seems to be similar to unification of a small term – so it might be worthwhile expanding calls to some of the dict predicates into VM instructions.

BTW, library(record) is missing a get/set predicate similar to get_dict/5.

Planning is a big word. You can view the problem as a graph
problem: you have a graph with vertices 1…255 (or even more),
and the edges are labeled with “i”, “d” and “s”.

The problem can now be cast as:

  • For each number n in the range 1…255 find the
    shortest path back to zero.

I am pretty sure you can massively bring down the 1.8 seconds
if you look up one of the shortest-path algorithms that are around,
some of which even have Prolog realizations.
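
For instance, since every edge has unit cost, a plain breadth-first search from 0 over the forward steps finds a shortest program for every reachable value in one sweep. A sketch (ignoring the “o” step for now; all names are made up):

:- use_module(library(pairs)).

% bfs_all(+Limit, -Paths): Paths pairs each reachable value =< Limit
% with a shortest i/d/s sequence that produces it from 0.
bfs_all(Limit, Paths) :-
    bfs([0-[]], [0], Limit, Paths).

bfs([], _, _, []).
bfs([A-Rev|Queue0], Seen0, Limit, [A-Ops|Paths]) :-
    reverse(Rev, Ops),
    findall(B-[Op|Rev],
            ( member(Op, [i, d, s]),
              step(Op, A, B),
              B > 0, B =< Limit,
              \+ memberchk(B, Seen0)
            ),
            Next0),
    sort(1, @<, Next0, Next),      % keep one predecessor per new value
    pairs_keys(Next, New),
    append(Seen0, New, Seen),
    append(Queue0, Next, Queue),   % FIFO queue keeps it breadth-first
    bfs(Queue, Seen, Limit, Paths).

step(i, A, B) :- B is A + 1.
step(d, A, B) :- A > 0, B is A - 1.
step(s, A, B) :- A > 1, B is A * A.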

If there is also an “o” step, then the vertices are pairs of numbers:
the output so far and the accumulator. In another forum post
about shortest paths, something like mode-directed tabling:

%:- table(path(_, _, _, min)).

Find the shortest path in between nodes
https://swi-prolog.discourse.group/t/find-the-shortest-path-in-between-nodes/2315/4

was suggested. So SWI-Prolog should have everything, yes or no?
I don’t know why @jamesnvc didn’t stick to tabling and used
order_by instead. Tabling would surely have the advantage that
it computes all solutions together, as is required for this problem.

Edit 07.10.2023
The SWI-Prolog documentation on mode-directed tabling lists this generic algorithm:

:- table
    connection(_,_,lattice(shortest/3)).

shortest(P1, P2, P):-
    length(P1, L1),
    length(P2, L2),
    (   L1 < L2
    ->  P = P1
    ;   P = P2
    ).

connection(X, Y, [X,Y]) :-
    connection(X, Y).
connection(X, Y, P) :-
    connection(X, Z, P0),
    connection(Z, Y),
    append(P0, [Y], P).

But maybe we can get rid of the append?
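
For example (an untested sketch, keeping the same table declaration and shortest/3 as above): store the path reversed, so every extension is a cons instead of an append/3, and reverse it once when reading off an answer:

% The path is stored from Y back to X; cons replaces append/3.
connection(X, Y, [Y,X]) :-
    connection(X, Y).
connection(X, Y, [Y|P0]) :-
    connection(X, Z, P0),
    P0 = [Z|_],
    connection(Z, Y).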