# Blog Post on Harrop Clauses and Scoped Dynamic Databases

I’ve been tinkering with embedding lambda-prolog-like capabilities in regular prolog and wrote up some blind alleys I went down as well as some promising techniques. Interested to hear other suggestions or comments. Thanks!

Blog Post


I don’t tend to think of lambda prolog as being per se in the same arena as Coq/Agda/Lean, although that is a possible perspective. It is a programming language like prolog, just with support for some interesting, logically motivated features. Now, since these features are inspired by the “computation = proof search” idea and improve the treatment of binding forms, they can make lambda prolog a more suitable place for interactive or automatic theorem-proving-like activities than regular prolog. See https://www.site.uottawa.ca/~afelty/dist/lprolog97.pdf

What my blog post is trying to do is understand if it is possible to get the meat of the extra things lambda prolog brings to the table as a library in prolog.

The main application I have in mind is typeclass resolution with quantified constraints, and vaguely theorem proving. I don’t think my framework being intuitionistic is a blocker for these. I also kind of like intuitionistic logic. I’m largely following the lead of Miller and Nadathur’s book anyway and not trying to be all that innovative in terms of the system I am implementing.

I hadn’t really considered myself to be implementing a theorem prover, just operational-ish mechanisms for a better-scoped dynamic database and scoped fresh constants. If I were implementing a classical theorem prover, I’d follow these notes: How to Build an Automated Theorem Prover | Tutorial at TABLEAUX 2019. Looking at them again now, I see the similarity, and this deserves more thought.

The drinker’s paradox is an interesting example because the extrude_check blocks it. The goal is `ex x, D(x) -> forall y, D(y)`. Under the proof-search reading I am following, first we introduce a logical variable X when we traverse `ex`; then we scoped-assert D(X); then we gensym a fresh constant y; then we try to resolve D(y) against the database. This unifies X with y, so retroactively we actually asserted D(y) into the database at the implication step. Great. But now, when exiting the `ex` binder, the extrude check verifies that the witness term X contains no fresh constants it shouldn’t. X is now bound to y, which was not in scope at the time we traversed the `ex` binder, so the check fails. Hence the drinker’s-paradox goal fails.
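For concreteness, here is a minimal sketch of what I mean by the extrude check. The names and the `c(N)` encoding of fresh constants are mine, not the blog's actual code: fresh eigenvariables are numbered constants, and a witness term is rejected if it mentions any constant minted after its binder was entered.

```prolog
:- use_module(library(occurs)).  % sub_term/2

% extrude_check(+Term, +Cutoff): succeed iff Term mentions no fresh
% constant c(N) with N >= Cutoff, i.e. none that was minted after the
% point where Cutoff was recorded (the `ex` binder).
extrude_check(Term, Cutoff) :-
    \+ ( sub_term(S, Term),
         nonvar(S),
         S = c(N),
         integer(N),
         N >= Cutoff ).
```

In the drinker’s-paradox run above, the counter is 0 when `ex` introduces X; `all` then mints c(0), resolution leaves X = c(0), and `extrude_check(c(0), 0)` fails, rejecting the goal.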

This may be an issue with the yall library not capturing the X variable in the manner intended. After some printf debugging, I can see that this goal fails because it is indeed attempting to unify X with the fresh, out-of-scope constant:
`right([],[],ex([X,Q]>>(Q=all({X} / [Y,P]>>(P=impl(+d(X),+d(Y))))))).`

I have not had much luck naively relying on the library to do things as I expect.

But I think it needs to use copy_term/4 to keep the X variable shared. Or something like that. The scoping of variables is a real mess and is kind of the source of the biggest problems in trying to build this prover.

Edit: Oh, I see. Learned a new trick. Yes, copy_term/2 is what I want. But I do think the `{} /` capturing annotations are necessary.
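For anyone else hitting this, a tiny demonstration of the yall behaviour in question (the predicate names here are mine). Without a `{..}/` capture set, yall copies the lambda on each call, so free variables in the body are renamed and bindings made inside never reach the caller; `{X}/` declares X shared:

```prolog
:- use_module(library(yall)).

% X is free in the lambda body: yall copies the lambda before calling
% it, so the binding X = a lands on a fresh copy and is lost.
local_demo(X) :-
    call([Y]>>(X = Y), a).      % leaves the outer X unbound

% {X}/ marks X as captured (shared), so the binding survives the call.
shared_demo(X) :-
    call({X}/[Y]>>(X = Y), a).  % binds the outer X to a
```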

Edit 2: Ah, but I can change my interpreter so the user doesn’t have to write these error-prone capturing annotations. Interesting. This has been a helpful discussion.

Fantastic! Any opinions about making the parameters P and S implicit by using library(intercept)? I hoped to keep the whole thing as shallow as possible (make `all`, `->`, and `ex` normal prolog predicates that query the context, rather than obviously interpreted).

Yes, I hadn’t seen undo/1. That, and transaction/1 or snapshot/1 mentioned in the docs, do seem potentially useful. Something like `impl(D,G) :- assertz(D), undo(retract(D)), call(G), retract(D), undo(assertz(D))`, maybe. To share variables with the asserted clauses, perhaps S needs to be accessed via intercept/4 (which does have an extra argument for sharing variables).
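Writing that sketch out as runnable code (still just a sketch; `impl/2` is my name for it, and I haven’t thought hard about interaction with transactions):

```prolog
% impl(+D, :G): prove G with clause D temporarily in the database.
% undo/1 (SWI-Prolog) schedules a goal to run on backtracking, so D is
% removed if we backtrack out of G and restored if we backtrack back in.
impl(D, G) :-
    assertz(D),
    undo(retract(D)),    % backtracking out before G succeeds: drop D
    call(G),
    retract(D),          % G succeeded: D goes out of scope
    undo(assertz(D)).    % backtracking back into G: restore D
```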

Wait, in your code, is putting the current context into the skolem term plus unify_with_occurs_check what achieves the extrude check?

It was already discovered in 1988 by Melvin Fitting, or even before. But I don’t know whether he was aware that it can also be used for classical FOL. It rather seems to me that it was then a solution to step outside of classical FOL, i.e. towards modal logic. But this was only my first impression. In the text he actually makes it clear that he wants to move to unification to have more “guidance” in the search:

But that he used it for modal logic gives me some confidence that it is a method that works for a wide range of logics, namely also for Harrop formulas etc. that do not allow the Drinker Paradox. He also mentions some semantical problems when thinking about Skolem functions in non-classical logics, which then led him to the revised δ. Although if one thinks about Skolem functions, they often also come with parameters, and these parameters come from the opposite quantifiers. Depending on the viewpoint, i.e. whether the prover is supposed to do a falsification or a verification, the terms could also be called Herbrand functions.

The screenshot is from here:

First-Order Modal Logic
Fitting, Melvin Chris & Mendelsohn, Richard L. (1988).
https://id144254.securedata.net/melvinfitting/bookspapers/pdf/papers/FOMTFitting.pdf

The term Herbrand function is found here:

On Herbrand’s Theorem
Samuel R. Buss
Herbrand's theorem

You may have a look at work from the past couple of years by David Warren on XSB Prolog.

Why would that be relevant? I’m pretty sure XSB has a notion of an object that represents a set of Prolog clauses. I’m not sure of its status, but I’ve seen a talk by David on that. Should be possible to find. I think it was at the latest New York ICLP.

SWI-Prolog has a working undo/1. It is probably possible to define the intended behavior (not sure what that is exactly) based on what is there. Part of that is probably defining an alternative API on top of the transaction logic.

“Embedded implication” and “hypothetical reasoning” are both good key phrases.

This recent paper seems very similar to our discussion. They actually do use SWI. They show a related meta-interpreter, but I do not understand their shallow encoding. It might involve significant source-to-source transformation. Or, I think, they are suggesting explicitly passing the shared variables to the asserted clauses as a last argument. This is very reminiscent of a closure conversion or lambda-lifting transformation.
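Here is a toy illustration of what I take that suggestion to mean (my own code, not the paper’s): assertz/1 copies its argument, so sharing with the asserting context is lost, and the lifted variant recovers it by threading an explicit environment argument.

```prolog
:- dynamic hyp/1, hyp2/2.

% assertz/1 copies its argument, so a variable shared with the calling
% context is lost: the stored clause gets a fresh variable.
sharing_lost :-
    assertz(hyp(q(X))),
    X = a,
    hyp(q(b)),                 % still succeeds: the stored X was a copy
    retract(hyp(_)).

% "Lambda-lifted" variant: keep the clause's free variables in an
% explicit environment argument and supply them at call time.
sharing_kept(X) :-
    assertz(hyp2(q(V), [V])),  % V is renamed by assertz, but both of
                               % its occurrences stay linked in the copy
    hyp2(q(X), [X]),           % plug our live X into the environment
    retract(hyp2(_, _)).
```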


Again, fantastic work! My best understanding is that the delimited-continuation version was maybe too clever by half, based on some comments of @jan, and that intercept/4 is better behaved if it works for your use case. intercept/4 is something like a single-shot delimited continuation, so it has to return the contextual database once, and then member/2 installed on the hypo predicate is what provides the nondeterminism, rather than returning from the shift multiple times. I would be surprised if the delimited-continuation version is faster than just using member/2, and it is certainly more confusing. On the other hand, what you have presented is very elegant. But what happens, for example, if you don’t find the fact you want in your db? How does this shift chain end?
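For reference, here is my guess at the shape of that intercept version. `assume/2` and `hypo/1` are made-up names, and I’m going from the library(intercept) docs rather than your code, so treat this as a sketch:

```prolog
:- use_module(library(intercept)).

% hypo(?G): ask the innermost assume/2 for the current context (a plain
% list of hypothetical clauses) and pick one by ordinary backtracking.
hypo(G) :-
    send_signal(context(Cs)),
    member(G, Cs).

% assume(+C, :Goal): run Goal with C pushed onto the context.  The
% enclosing context, if any, is fetched by re-signalling; at top level
% that signal has no handler, so we fall back to the empty context.
assume(C, Goal) :-
    (   catch(send_signal(context(Outer)), _, fail)
    ->  true
    ;   Outer = []
    ),
    % intercept/4's extra argument is what keeps [C|Outer] shared with
    % the signaller: Ball and Handler are copied, the argument is not.
    intercept(Goal, context(Cs), =(Cs), [C|Outer]).
```

The handler returns the database exactly once per signal, and member/2 in `hypo/1` does the rest, which is the single-shot behaviour described above.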

Another thing about the intercept/4 version is that the database list could easily be swapped out for a better data structure eventually. Maybe tries (see the SWI-Prolog manual)? Although it probably needs to be a purely functional data structure. rbtree? dict? The delimited-continuation version, I think, has no choice but essentially naive linear search.
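As one purely functional option, here is a sketch of indexing the context with library(assoc) (AVL trees, persistent, shipped with SWI), keyed by functor/arity so lookup doesn’t scan the whole context. The predicate names are mine:

```prolog
:- use_module(library(assoc)).

% An empty hypothetical context.
empty_ctx(Ctx) :- empty_assoc(Ctx).

% add_hyp(+Fact, +Ctx0, -Ctx): push Fact onto the bucket for its
% Name/Arity key, leaving Ctx0 untouched (persistent update).
add_hyp(Fact, Ctx0, Ctx) :-
    functor(Fact, Name, Arity),
    (   get_assoc(Name/Arity, Ctx0, Facts)
    ->  true
    ;   Facts = []
    ),
    put_assoc(Name/Arity, Ctx0, [Fact|Facts], Ctx).

% lookup_hyp(?Goal, +Ctx): fetch only the bucket matching Goal's
% functor, then backtrack over it with member/2.
lookup_hyp(Goal, Ctx) :-
    functor(Goal, Name, Arity),
    get_assoc(Name/Arity, Ctx, Facts),
    member(Goal, Facts).
```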

I haven’t investigated Bousi-prolog in any depth. I only found that paper after you pointed out that “embedded implication” is a keyword to search for.

Seems they went back to the original publication? Making it a callable term in the case of a continuation was a later (IMO) improvement. 0 for no-continuation seems weird, but it is a non-callable term.