CNF/DNF can be built as ZDDs at almost zero cost

The number of undirected paths that connect (0,0) and (21,21), walking on integer nodes (i, j) (0 =< i, j =< 21).
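For very small grids this count can be checked by brute force; the sketch below (plain Python, not the ZDD library) enumerates self-avoiding corner-to-corner paths by depth-first search. It is exponential, which is exactly why the 21x21 case needs a ZDD/Simpath-style method instead.

```python
# Brute-force count of self-avoiding paths between opposite corners of an
# (n+1) x (n+1) node grid: nodes (i, j) with 0 <= i, j <= n.
# Feasible only for tiny n; rect(21, 21) needs a ZDD/Simpath approach.

def count_corner_paths(n):
    goal = (n, n)

    def dfs(node, visited):
        if node == goal:
            return 1
        i, j = node
        total = 0
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni <= n and 0 <= nj <= n and (ni, nj) not in visited:
                visited.add((ni, nj))
                total += dfs((ni, nj), visited)
                visited.remove((ni, nj))
        return total

    return dfs((0, 0), {(0, 0)})

# count_corner_paths(2) == 12, count_corner_paths(3) == 184
```

These small values agree with the known sequence for corner-to-corner self-avoiding paths on square grid graphs.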

I don’t remember well. I think they did it on some laptop about 10 years ago.

I don’t know about CUDD; I suspect bugs in my code, and I also have
some doubts about Prolog for ZDD in general. For now I have no idea how to investigate these two hypotheses.

My zdd library has a maplist-like map_zdd, for example. If you mean that one of the merits of ZDD is that it makes such familiar (ground) term operations more freely applicable than the Boolean-table model does, I agree. But, as always, I am not sure.

% ?- zdd((X<<pow([1,2,3]), map_zdd(pred([X, Y]:- Y is X^2), X, Y), sets(X, PowX),
%	sets(Y, YinList))).	
%@ X = 4,
%@ Y = 10,
%@ PowX = [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]],
%@ YinList = [[], [1], [1, 4], [1, 4, 9], [1, 9], [4], [4, 9], [9]].
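The query above can be mirrored in plain Python as a (hypothetical) analog: build the power set of [1, 2, 3] as an explicit family of sets — a real ZDD would share sub-structure instead of listing members — and map x -> x^2 over every member set, the way map_zdd does.

```python
from itertools import combinations

# Hypothetical plain-Python analog of the map_zdd query: the power set
# of [1, 2, 3] as an explicit family of sets, then x -> x^2 mapped over
# each member set. A real ZDD shares sub-structure; this list does not.

def powerset(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def map_family(f, family):
    return [{f(x) for x in s} for s in family]

pow_x = powerset([1, 2, 3])
squared = map_family(lambda x: x * x, pow_x)

sorted_sets = sorted(sorted(s) for s in squared)
# sorted_sets == [[], [1], [1, 4], [1, 4, 9], [1, 9], [4], [4, 9], [9]]
```

The final list matches YinList in the transcript above.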

A naive programmer can solve rect(3,3).
An ordinary programmer can solve rect(6,6).
A smart programmer can solve rect(10,10).
If you want to solve more than rect(10,10), then you need an algorithm.
This is my interpretation of Minato’s home page. Even if I luckily fall into the “smart” category, I must admit I lack the algorithm. In fact, they are an algorithms group, not software developers. Now I am deeply lost about where to go next, only watching the memory-usage monitor…

I am not familiar with Shogi software. I checked the Wikipedia article on the World Computer Shogi Championship, which must be a high-level competition. I haven’t heard of Prolog Shogi software. According to recent literature, strong shogi software seems to be influenced by AlphaGo, i.e. deep-learning technology.

Still, I do not understand what you are doing, sorry, and I do not want to pursue it. But I borrowed a small part of the DNF in your post and counted the number of solutions using my DNF library. Perhaps it is not relevant to your post; I have to repeat, echoing your removal of your posts in this thread, that there is nothing important here. I am interested in ZDD as a possible new way of programming in Prolog. There must be excellent ZDD implementations open on the net, in Python for example. My interest in general is as narrow as you could imagine.

data(A):-  A =
  +[ *[a0,-a1,a2,-a3,-a4,-a5,-a6,-a7,-a8,-a9,-a10,-a11,-a12,-a13,-a14,-a15,-a16,-a17,a18,-a19,a20,-a21,a22,-a23,-a24,-a25,-a26,-a27,-a28,-a29,-a30,-a31,-a32,-a33,-a34,-a35],
    *[a0,-a1,-a2,-a3,a4,-a5,-a6,-a7,-a8,-a9,-a10,-a11,-a12,-a13,-a14,-a15,-a16,-a17,a18,-a19,a20,-a21,a22,-a23,-a24,-a25,-a26,-a27,-a28,-a29,-a30,-a31,-a32,-a33,-a34,-a35],
    *[-a0,-a1,-a2,-a3,-a4,-a5,-a6,-a7,-a8,-a9,a10,-a11,-a12,-a13,a14,-a15,a16,-a17,-a18,-a19,-a20,-a21,a22,-a23,-a24,-a25,-a26,-a27,a28,-a29,-a30,-a31,a32,-a33,a34,-a35],
    *[-a0,-a1,-a2,-a3,-a4,-a5,-a6,-a7,-a8,-a9,-a10,-a11,-a12,-a13,a14,-a15,a16,-a17,-a18,-a19,-a20,-a21,-a22,-a23,-a24,-a25,-a26,-a27,-a28,-a29,a30,-a31,a32,-a33,a34,-a35]].

% ?- time((data(A), zdd((dnf(A, B), ltr_card(B, C))))).
%@ % 232,972 inferences, 0.017 CPU in 0.018 seconds (97% CPU, 13585165 Lips)
%@ A = +[*([a0, -a1, a2, -a3, -a4, - ...|...]), *([a0, -a1, -a2, -a3, a4|...]), *([-a0, -a1, -a2, - ...|...]), *([-a0, -a1, - ...|...])],
%@ B = 1854,
%@ C = 4 .
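The count of 4 can be sanity-checked by hand: in this particular DNF every product term mentions all 36 variables a0..a35, so each term denotes exactly one full truth assignment, and the model count is just the number of distinct terms. A hypothetical plain-Python re-encoding (not my library) that represents each term by the indices of its positive literals:

```python
# Each product term of data/1 mentions all 36 variables a0..a35, so it
# fixes one full assignment. Represent a term by the index set of its
# positive literals; distinct sets = distinct satisfying assignments.

terms = [
    {0, 2, 18, 20, 22},               # positive literals of term 1
    {0, 4, 18, 20, 22},               # term 2
    {10, 14, 16, 22, 28, 32, 34},     # term 3
    {14, 16, 30, 32, 34},             # term 4
]

models = {frozenset(t) for t in terms}
# len(models) == 4, matching C = 4 in the query above.
```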

(nothing of importance)

(nothing of importance)

Coincidence! Yesterday I wrote code to select the subsets
that satisfy a simple cardinality condition. I wrote two versions; one uses intersection, which is a basic operation.
However, the intersection version at first gave a wrong answer: a typo had crept into the intersection at some point of code polishing. After the correction, the intersection version performs about the same as the other. I would like to think that my current ZDD library already has some power that existing Prolog programming might find difficult to match, though I don’t know how many bugs still remain in the library, and my interest is almost always limited to the rect(W,H) problem.

% ?- time(( N = 100, numlist(1, N, Ns), zdd((X<<pow(Ns), shift0(card_filter_between(50, 51, X, Y)), card(Y, C))))).
%@ % 598,888 inferences, 0.038 CPU in 0.042 seconds (91% CPU, 15645750 Lips)
%@ N = 100,
%@ Ns = [1, 2, 3, 4, 5, 6, 7, 8, 9|...],
%@ X = 101,
%@ Y = 5150,
%@ C = 199804427433372226016001220056 .
card_filter_between(I, J, X, Y, S):-
	card_filter_gte(I, X, Z, S),
	card_filter_lte(J, Z, Y, S).

% ?- time(( N = 100, numlist(1, N, Ns), zdd((X<<pow(Ns), shift0(card_filter_between_by_meet(50, 51, X, Y)), card(Y, C))))).
%@ % 924,870 inferences, 0.060 CPU in 0.064 seconds (94% CPU, 15335522 Lips)
%@ N = 100,
%@ Ns = [1, 2, 3, 4, 5, 6, 7, 8, 9|...],
%@ X = 101,
%@ Y = 7600,
%@ C = 199804427433372226016001220056 .
card_filter_between_by_meet(I, J, X, Y, S):-
	card_filter_gte(I, X, Z, S),
	card_filter_lte(J, X, U, S),
	zdd_meet(Z, U, Y, S).
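The count C returned by both queries can be cross-checked independently of the ZDD code: subsets of {1, ..., 100} whose cardinality lies between 50 and 51 number C(100, 50) + C(100, 51). A one-liner with Python's standard binomial function confirms it:

```python
import math

# Independent check of the C value above: the number of subsets of
# {1, ..., 100} with cardinality 50 or 51 is C(100,50) + C(100,51).

expected = 199804427433372226016001220056
count = math.comb(100, 50) + math.comb(100, 51)
# count == expected
```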

(nothing of importance)

You are surely picking up the ZDD stuff fast and correctly; now you are ahead of me. I am a minimalist in the sense that, for me, ZDD is entirely a set of familiar operations on families of sets whose elements come from a linearly ordered set. Everything else is derived in my mind automatically from this, thanks to the Axiom of Extensionality (= Minato’s zero-suppression rule). For one who is familiar with standard set theory, documentation is not necessary, because everything (graphs, Boolean tables, …) is modeled as a family of sets.

“With zero cost” means that DNF/CNF is almost a ZDD already, except for
the way complementary pairs of literals are treated in DNF/CNF. Otherwise they are almost the same. Honestly, this is one reason why I am reluctant to write documentation about ZDD, aside from the fact that I am lazy.
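One way to see this "almost the same" claim: a monotone DNF (positive literals only) literally *is* a family of sets — each product term is the set of its variables, and + is union of families — and a family of sets is exactly what a ZDD stores. A small hypothetical sketch:

```python
# A monotone DNF (positive literals only) is literally a family of
# sets: each product term is the set of its variables, and + is union
# of families. That family is what a ZDD stores, which is the sense of
# "almost zero cost"; complementary pairs (x and -x in one formula)
# are the only extra wrinkle.

def dnf_to_family(terms):
    # terms: list of variable-name lists, one list per product term
    return {frozenset(t) for t in terms}

family = dnf_to_family([["a", "b"], ["b", "c"], ["a", "b"]])
# Duplicate terms collapse, as in a canonical ZDD:
# family == {frozenset({"a", "b"}), frozenset({"b", "c"})}
```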

To be honest, I don’t remember the exact definition of the Simpath method. I am afraid, as always, that I may be missing something.

Last night I made a slight modification to my path-counting code so that it counts cycles. It is not yet fully tested, but it works for rect(W, H) with small W and H, and it terminates for rect(10, 10) in around 10 minutes. I enjoyed the time spent coding, which is my purpose. I am not familiar with Hamiltonian graphs, and I will not attack such NP-complete problems directly.

This is your current high score, the best you can do?
How much time does it take to compute the above?

Yes, that’s my best, within 10 hours. I forgot to call time/1. I strongly hope it is not the best Prolog can do. Someone like you will break rect(11, 11), I hope.

Thank you for quoting my old post on trying a UDG with 13 x 13 nodes, which used shift/reset.
Moreover, that version used the rectangular topology of the grid graph explicitly.

On the other hand, the current successful attempt drops shift/reset and uses global variables instead, and it assumes only a given set of unordered links, preprocessing from them a “frontier vector”, which is easily obtained from the links. In other words, the input of the current version is a UDG given as a set of unordered links.
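One plausible way such a frontier vector can be derived from an ordered list of unordered links (a hypothetical sketch, not the actual frontier-vector.pl code): after processing edge i, the frontier is the set of vertices seen so far that still occur in a later edge — only those need mate information kept.

```python
# Hypothetical sketch: derive a "frontier vector" from an ordered edge
# list. After processing edge i, the frontier is every vertex already
# seen whose last occurrence lies in a later edge.

def frontier_vector(edges):
    last = {}
    for i, (u, v) in enumerate(edges):
        last[u] = i
        last[v] = i
    frontiers, seen = [], set()
    for i, (u, v) in enumerate(edges):
        seen.update((u, v))
        frontiers.append({x for x in seen if last[x] > i})
    return frontiers

# Edges of the unit square rect(1, 1), listed row by row:
fv = frontier_vector([(1, 2), (1, 3), (2, 4), (3, 4)])
# fv == [{1, 2}, {2, 3}, {3, 4}, set()]
```

With a good edge ordering the frontier stays small, which is what keeps the mate table manageable.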

How about applying your simple method to rect(n, n) for n = 5, 6, 7, …?
In fact, I also tried similar approaches some time ago, and I remember that
they suddenly got stuck at n = 7 or 8.

If your method passes for n = 7 or 8, it will simply be a big surprise, even for D. Knuth. Have you tried these cases?

He’s gotta finish writing his book first

Thank you for the correction of my big mistake. I am very sorry for the mistake. (ChatGPT says:
“He is still alive as of September 2024. He was born on January 10, 1938, and remains an influential figure in the field of computer science.”)

I remember that I happened to join a welcome lunch at a restaurant in Kamakura
when he was invited to our campus.

Sorry. It is frontier-vector.pl.

Thank you for the pointer to the article by Hiroaki Iwashita. I will surely read it soon.
For now, I am going to make a variant of the experimental ‘frontier-vector.pl’ code
with respect to the core data structure for the mate and partial paths. I am not sure whether it will work better than the current version.

Despite your advice, I insist on my current hash_terms approach,
though I might soon see that you are right.