Thanks, I am learning. It will be a good occasion to see how to implement iterative deepening. If I am not mistaken, Otten uses it in leanseq_v5.pl, his prover for first-order logic.
I add that I just tested your Prolog rule for => elimination in the following format with iterative deepening; if I am not mistaken, this code is leanseq_v5.pl from Otten's work:
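For readers unfamiliar with the technique: a hedged sketch, not Otten's actual code, of the usual iterative-deepening driver around a depth-bounded prover `prove/2` (a hypothetical predicate here). The bounded search is retried with depth 1, 2, 3, ... until it succeeds, which avoids the divergence of plain depth-first search in first-order logic.

```prolog
% Hedged sketch (not Otten's code): iterative deepening over a
% depth-bounded prover prove/2, a hypothetical predicate here.
prove_id(Sequent) :-
    between(1, inf, Depth),   % try depth bounds 1, 2, 3, ...
    prove(Sequent, Depth),    % depth-bounded proof search
    !.                        % stop at the first depth that succeeds
```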
The fact that a propositional prover is incomplete vis-à-vis FOL matrices is not very worrying, in my opinion, because syntactically, FOL matrices belong to the first-order predicate calculus, not to propositional calculus. The examples you gave are correctly treated by Jens Otten's leanseq_v5.pl:
The order of rules in Zucker-Otten's leanseq_v2.pl is not efficient: non-branching rules must come before the branching ones. If you compare this result with the result obtained by a fork of this prover, you get a better sequent proof.
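To illustrate the ordering point with a hedged fragment (not Zucker's or Otten's actual clauses): in a lean prover, the textual order of the `prove` clauses is the order in which rules are tried, so the non-branching rules should be listed first.

```prolog
:- op(700, xfx, >).      % sequent arrow, as in the lean provers discussed
:- op(500, yfx, &).      % conjunction
:- op(500, yfx, v).      % disjunction

% Illustrative fragment only. L& is non-branching (one premise), so its
% clause comes first:
prove(G > D) :-
    select(A & B, G, G1), !,
    prove([A, B | G1] > D).

% Lv is branching (two premises), so it is tried only afterwards:
prove(G > D) :-
    select(A v B, G, G1),
    prove([A | G1] > D),
    prove([B | G1] > D).
```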
Best wishes, a happy new year
That’s interesting, thanks.
From a logical point of view, if I understand correctly, the difference is that the Weakening rules are built into rules L& and Rv (in your LK), while they are built into the Axiom or Init rule in Otten's program.
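A hedged sketch of that second option, not Otten's actual code: an Axiom/Init clause that closes a branch as soon as some formula occurs on both sides of the sequent. Discarding all the remaining formulas of G and D in one step is exactly a built-in Weakening.

```prolog
:- op(700, xfx, >).   % sequent arrow

% Illustrative Init rule: succeed if some formula A occurs both in the
% antecedent G and the succedent D; everything else in G and D is
% silently discarded, i.e. Weakening is absorbed into the axiom.
prove(G > D) :-
    member(A, G),
    member(A, D).
```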
Thanks, again, it is very interesting and I will try to follow your code to get a genuine LK prover.
Did you test the efficiency of your prover?
My fork succeeds in proving Pelletier problem 17, i.e. the following classical theorem:
but I checked the parenthesis balance with Emacs and it is correct. On my website, the same formula is unprovable, probably because of time limitations. But it is provable locally with my fork via Emacs:
% 54,890,488 inferences, 4.483 CPU in 4.487 seconds (100% CPU, 12244554 Lips)
I do not paste the output Prolog proof here; it is too big.
EDIT:
I have tested my zdd-based refutation prover on Pelletier problem 17 with N = 15, which took about one hour. I have too little knowledge of LK to say much about it, and perhaps no time to learn it. Anyway, thank you for showing the power of the LK prover.
Thanks for your note.
First, the rules for the connectives are LK's, but it is not a genuine LK prover, because of the lack of structural rules: Weakening, for example, is built into the Axiom. Moreover, there is of course no refutation rule in LK.
Second, the efficiency of this classical Prolog prover is based on two changes I made, starting from Philip Zucker's fork of Jens Otten's leanseq_v2.pl on his personal blog (see the link above in this discussion):
from Zucker's fork. It is well known that append/3 is a cause of inefficiency.
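A hedged illustration of the kind of change meant here (not the fork's actual code): picking a formula F out of a context G with append/3 builds and discards prefix lists on backtracking, whereas the library predicate select/3 walks the list once.

```prolog
% append/3-based way to remove one formula F from G, leaving Rest:
pick_append(F, G, Rest) :-
    append(Pre, [F | Post], G),
    append(Pre, Post, Rest).

% select/3 does the same in a single pass over the list:
pick_select(F, G, Rest) :-
    select(F, G, Rest).
```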
That’s it.
Now, it is maybe possible both to preserve the efficiency of this prover and to transform it into a genuine LK prover, thanks to @j4n_bur53's fork. But I haven't taken the time to try it yet.
Thanks for this suggestion. I began to adapt leanseq_v5.pl, but I met an error with Tau Prolog, while everything was correct with SWI. I am always impressed by your quickness. Unfortunately, I have to grade students now, and I will not be available in the coming days.
Thanks again !
Suspecting that my refutation prover spends most of its time on the Pelletier problem converting the given formula into a set of clauses for the refutation method, I added four lines to the prover, out of curiosity, to use a hash table as a memo. The effect is dramatic! Now it takes less than a minute even for N = 100.
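A hedged sketch of the memoisation idea, not the actual four lines: here `cnf/2` stands in for the (hypothetical) expensive clausification step. Its result is stored the first time it is computed and looked up afterwards; SWI-Prolog's first-argument indexing makes the lookup a hash-table access on the formula term.

```prolog
:- dynamic memo_cnf/2.

% Illustrative memoised wrapper around a hypothetical cnf/2:
% reuse a cached result if one exists, otherwise compute and cache it.
cnf_memo(Formula, Clauses) :-
    (   memo_cnf(Formula, Clauses)
    ->  true
    ;   cnf(Formula, Clauses),
        assertz(memo_cnf(Formula, Clauses))
    ).
```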
EDIT:
I do not intend to make noise around your current work; rather, I have a strong interest in LK/LJ/NK/NJ. I was once somewhat close to the logic community in Japan and had opportunities to listen to their lectures on cut elimination, proof normalization, etc., though almost nothing remains in my active memory except respect for such logic people.
EDIT:
The "." and "+" signs below in the log are printed each time a hashed key matches, which supports my suspicion above.
Thank you for the references. "Less than 1 millisecond" is amazing.
However, it sounds reasonable to me, because I checked that almost 100% of the total time in my case is spent on preprocessing; the general refutation procedure proper takes less than about 0.2 seconds, i.e. it is negligible compared with the preprocessing.
%@ % 1,694,993 inferences, 0.173 CPU in 0.173 seconds (100% CPU, 9792496 Lips)
%@ VALID
%@ % 491,556,942 inferences, 44.378 CPU in 46.291 seconds (96% CPU, 11076523 Lips)
Thanks for the code. It would be useful when zdd over the Boolean ring is considered. So far my zdd works only over totally ordered atoms, i.e. the Boolean algebra. Fortunately, the behavior of the latter is close to the former, as you can see in the code (library(pac/zdd.pl)), by selecting a natural ordering of the literals. I am far from sure that such an extended zdd would be useful for SAT problems; I am pessimistic about such extensions, without real confidence, but I have added it to my future plan.
Hello. Again, the order of rules in Zucker-Otten's Prolog prover explains this result. My fork provides a non-branching proof, while yours interestingly adds what is called "vacuous discharge" in Natural Deduction (ND). I have never seen "vacuous discharge" in Sequent Calculus (SC), because it corresponds to a useless Weakening on the left. Jan von Plato and Sara Negri explain the translations between ND and SC in their excellent book. Last point: the axiom in your new fork is clearly non-standard. Have a nice Sunday.
I never said that every Weakening is useless, but in the example you gave it is a useless Weakening: if there is a theorem on the right of the turnstile of a sequent, any formula on the left is useless for proving that sequent.
I looked at your code, and I am unable to imitate it to close this gap. There is no Prolog output in your fork, and I get neither latex(P,V) nor [] in place of Proof in the query. Your explanations are not developed enough for me; maybe they suffice for others, but not for me.