Let's say we want to build a little web site and present a THINKER
that helps us find propositional proofs or even FOL proofs,
just like here (not the same meaning of "heuristic" as in A* search):

I just noticed a defect in your Fitch-style proof that should be corrected, because there is something unsound in the expression of the introduction rule:

The trouble is that line 6 does not show that assumption 1 is discharged. This means that a vertical bar should appear before assumption 1 and another one before assumption 2, because these two assumptions are discharged. We should see:

| d(a) [assume]
| | [assume]
| |
| |
| ← discharge of assumption 2
d(a) → ⊥ [imp(1-5)] ← discharge of assumption 1: block 1-5 is a sequent from which conditional 6 is inferred…

Even if flags are not used, but only vertical bars, discharges must be expressed clearly; that is one of the most obvious merits of Fitch's style in comparison with Gentzen's. You can check my claim by reading Richard Kaye's excellent book, The Mathematics of Logic, CUP, 2007. This point also explains the distinction in carnap.io between Premiss (notation ":PR") and Assumption (notation ":AS"): the latter is discharged, the former is not: https://forallx.openlogicproject.org/html/Ch17.html#S4
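To make the bar convention concrete, here is a minimal sketch of a renderer where each still-open (undischarged) assumption contributes one vertical bar, so a discharge is visible as a drop in bar depth on the next line. The data format and names are mine, purely illustrative, not carnap.io's:

```python
# Minimal sketch: render a Fitch-style proof so that each open
# (undischarged) assumption contributes one vertical bar.
# A discharge shows up as a decrease in bar depth.

def render_fitch(lines):
    """lines: list of (depth, formula, justification) triples,
    where depth = number of assumptions still open at that line."""
    out = []
    for depth, formula, justification in lines:
        bars = "| " * depth
        out.append(f"{bars}{formula} [{justification}]")
    return "\n".join(out)

# The proof shape from the discussion above ("..." marks elided lines,
# "_|_" is an ASCII falsum):
proof = [
    (1, "d(a)",        "assume"),    # assumption 1 opens a subproof
    (2, "...",         "assume"),    # assumption 2 opens a nested one
    (2, "...",         "..."),
    (1, "...",         "..."),       # assumption 2 discharged: one bar fewer
    (0, "d(a) -> _|_", "imp(1-5)"),  # assumption 1 discharged: no bars left
]

print(render_fitch(proof))
```

The point of the sketch is only that discharge is a purely structural fact (the bar count decreases), so a renderer can make it impossible to hide.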

Does there exist a book where the lambda calculus and related topics are explained in a simple and pedagogical way? As soon as I see the symbolism of the lambda calculus, I do not even try to understand it, because it is too abstract for me…

Usually I like learning things from books, but for the lambda calculus, I found that a few university lectures really helped (it was part of a "theory of computation" course). Perhaps there are some videos that would help…

There's another layer beyond "lambda calculus for dummies", which gets into model-theoretic stuff. I've taken a grad-level course in it, taught by a brilliant (and rather eccentric) guy, and sadly I've retained almost none of it, because it mostly hasn't been needed in my day-to-day work.

But one thing I remember: after about the 5th lecture on "fixed-point theorems" (there are quite a few of them), most of which was going right over my head, I had a flash of insight that this particular proof was simply proving that the notation we were using for writing recursive functions was guaranteed to represent a real function. That is, everything I could write using this notation actually represented a description of a computation and would never be meaningless marks on a piece of paper. That's a very important theoretical result (there are lots of "incompleteness" theorems, where this kind of thing doesn't hold), but once it's proved, it has no effect on day-to-day programming.
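That insight can be made tangible in code. A fixed-point combinator turns a non-recursive description of one "unfolding" of a function into the recursive function itself, which is exactly the guarantee those theorems provide. In an eager language the classic Y combinator diverges, so this sketch uses its call-by-value variant (often called the Z combinator); all names here are illustrative:

```python
# Call-by-value fixed-point combinator (the "Z combinator").
# fix(F) returns a function f satisfying f == F(f) (extensionally):
# a recursive definition written this way denotes a genuine function.
def fix(F):
    return (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# One "unfolding" of factorial: it receives the function to use for
# the recursive call, and never mentions itself by name.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = fix(fact_step)
print(factorial(5))  # 120
```

Nothing in `fact_step` is self-referential; the recursion is manufactured entirely by `fix`, which is why proving that such fixed points always exist matters.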

Responding to a deleted post ("You made that up, right? Who was the teacher?"):

My teacher was Akira Kanda, but I couldn't find his course notes online (and I long ago lost my copy, and have forgotten most of the course except for how hard it was): dblp: Akira Kanda
I vaguely remember some of this: Typed Recursion Theorems