Let's say we want to build a little web site and present a THINKER that helps us find propositional proofs, or even FOL proofs, just like here (not the same meaning of heuristic as in A* search):
I just noticed that there is a defect in your Fitch-style proof that should be corrected, because there is something unsound in the expression of the introduction rule:
The trouble is that line 6 does not show that assumption 1 is discharged. This means that a vertical bar should appear before assumption 1 and another one before assumption 2, because these two assumptions are discharged. We should see:
1 | d(a)      [assume]
2 | | …       [assume]
3 | | …
4 | | …
5 | …         ← discharge of assumption 2
6 d(a) → …    [imp(1-5)]  ← discharge of assumption 1: block 1-5 is a sequent from which conditional 6 is inferred…
Even if flags are not used but only vertical bars, discharges must be expressed clearly; that is one of the most obvious merits of Fitch's style in comparison with Gentzen's. You can check my claim by reading Richard Kaye's excellent book, The Mathematics of Logic, CUP, 2007. This point also explains the distinction in carnap.io between Premiss (notation ":PR") and Assumption (notation ":AS"): the latter is discharged, the former is not: https://forallx.openlogicproject.org/html/Ch17.html#S4
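If it helps to see the same bookkeeping done by a machine, here is a minimal sketch in Lean 4 (my own illustration, not part of the exchange above), with arbitrary propositions P and Q standing in for the formulas: each "intro" step opens a subproof, like an [assume] line behind a vertical bar, and each assumption is discharged when the corresponding conditional is concluded.

    -- Two nested assumptions, discharged in turn when each conditional is formed.
    example (P Q : Prop) : P → (Q → (P ∧ Q)) := by
      intro hp               -- assume P (assumption 1: the outer vertical bar)
      intro hq               -- assume Q (assumption 2: the inner vertical bar)
      exact And.intro hp hq  -- conclude P ∧ Q inside both subproofs
    -- hq is discharged when Q → (P ∧ Q) is concluded, and hp when the whole
    -- conditional P → (Q → (P ∧ Q)) is concluded; that is exactly what the
    -- vanishing vertical bars record in a Fitch-style proof.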
Is there a book where the lambda calculus and related topics are explained in a simple and pedagogical way? As soon as I see the symbolism of the lambda calculus, I do not even try to understand it, because it is too abstract for me…
Usually I like learning things from books, but for lambda calculus, I found that a few university lectures really helped (it was part of a "theory of computation" course). Perhaps there are some videos that would help…
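If the notation itself is the obstacle, it may help to remember that a lambda term is just an anonymous function, so most of the symbolism maps directly onto ordinary code. A small illustration of that reading (my own, in Haskell, where \x -> … is written for λx. …):

    -- (λx. x + 1) 2 just means "apply the anonymous function x ↦ x + 1 to 2".
    main :: IO ()
    main = do
      print ((\x -> x + 1) 2)            -- β-reduction: (λx. x + 1) 2 → 2 + 1 → 3
      print ((\f x -> f (f x)) (+ 1) 0)  -- (λf. λx. f (f x)): "apply f twice", gives 2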
There's another layer beyond the "lambda calculus for dummies" material, which gets into model-theoretic stuff. I've taken a grad-level course in it, taught by a brilliant (and rather eccentric) guy, and sadly I've retained almost none of it, because it mostly hasn't been needed in my day-to-day work.
But one thing I remember: after about the 5th lecture on "fixed-point theorems" (there are quite a few of them), most of which was going right over my head, I had a flash of insight that this particular proof was simply proving that the notation we were using for writing recursive functions was guaranteed to represent a real function; that is, that everything I could write using this notation actually described a computation and would never be just meaningless marks on a piece of paper. That's a very important theoretical result (there are lots of "incompleteness" theorems, where this kind of thing doesn't hold), but once it's proved, it has no effect on day-to-day programming.
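For a concrete (and far less formal) taste of what those theorems license: a recursive definition can be read as asking for a fixed point of a non-recursive "step" function, and the fixed-point theorems are what guarantee that such a fixed point exists. Here is a small sketch of that reading in Haskell, using the standard fix combinator from Data.Function (my own illustration, not from the course):

    import Data.Function (fix)

    -- A non-recursive "step": given some candidate factorial function f,
    -- return a slightly better approximation of factorial.
    factStep :: (Integer -> Integer) -> (Integer -> Integer)
    factStep f n = if n == 0 then 1 else n * f (n - 1)

    -- The factorial function is a fixed point of factStep:
    -- factorial = factStep factorial, and fix hands us exactly that fixed point.
    factorial :: Integer -> Integer
    factorial = fix factStep

    main :: IO ()
    main = print (factorial 5)   -- 120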
Responding to a deleted post ("You made that up, right? Who was the teacher?"):
My teacher was Akira Kanda, but I couldn't find his course notes online (and I long ago lost my copy, and have forgotten most of the course except for how hard it was): dblp: Akira Kanda
I vaguely remember some of this: Typed Recursion Theorems