Scheme to compose multiple Prolog extensions

Hello,

In my view, the unique strength of SWI-Prolog compared to other Prolog engines is the unmatched richness of its libraries, packages and add-ons, which extend its inference to a great many Logic Programming (LP) paradigms: Constraint LP (CLP), Inductive LP (ILP), Probabilistic LP (PLP), Answer Set Programming (ASP), Object-Oriented LP (OOLP), Service-Oriented Web LP (SOWLP), you name it. SWI seems to have reached the critical mass for such an ecosystem to become the “de facto” standard, in the absence of a genuine standard that the LP community has decided not to invest time specifying.

However, such an impressive line-up would be all the more practically useful for writing AI applications if its pieces worked together seamlessly … which is alas far from being the case.

Many of them are meta-programmed using added directives and term expansions, which very often conflict. A case in point: I would like to use Logtalk’s nice high-level architectural structuring of a complex app into taxonomies of objects with built-in inheritance down the specialization hierarchy, but inside my objects I would like to be able to define predicates not merely in vanilla Prolog but also in PLP using cplint’s LPAD or HPLP, and/or in s(CASP)’s constrained ASP. I would also like to leverage the declarative and relational nature of CLP(X) within a PLP program rather than having to write arithmetic calculations imperatively using Prolog’s is/2, etc.
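Concretely, here is an illustrative sketch of what I would like to be able to write: a single Logtalk object whose predicates mix cplint’s LPAD annotated disjunctions and CLP(FD) constraints. The individual syntaxes are real, but to be clear, this combination does not load today:

```prolog
% Illustrative only: this combination does NOT currently compile.
:- object(coin,
    extends(gambling_device)).    % Logtalk inheritance (hypothetical parent)

    % cplint LPAD annotated disjunction: a probabilistic clause
    heads(Coin):0.5 ; tails(Coin):0.5 :-
        toss(Coin).

    % CLP(FD): relational arithmetic instead of imperative is/2
    payout(Bet, Win) :-
        Win #= Bet * 2.

:- end_object.
```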

But I can’t, because the directives extending vanilla Prolog into PLP, OOLP, s(CASP), etc. are not composable and conflict with each other at transpile time. And when they do not, the respective term expansions used by each transpiler conflict with each other instead.
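To make the failure mode concrete: in SWI-Prolog, term_expansion/2 hooks are tried clause by clause and the first one that succeeds wins, so two independently written expanders cannot be layered. A minimal sketch with hypothetical expander names:

```prolog
% Hypothetical expanders standing in for two real transpilers.
% Extension A rewrites probabilistic (Head:P :- Body) clauses.
user:term_expansion((Head:P :- Body), Expanded) :-
    plp_compile(Head, P, Body, Expanded).       % hypothetical

% Extension B rewrites every clause seen inside an object.
user:term_expansion(Clause, Expanded) :-
    inside_object,                              % hypothetical
    oolp_compile(Clause, Expanded).             % hypothetical

% A probabilistic clause inside an object matches both, but only
% the first hook fires; its output is never offered to the second,
% so the two transpilers cannot be composed by stacking hooks.
```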

I understand that all those great projects have been developed independently, each with a single extension in mind. But I can’t stop wondering whether we could define some generic, extension-independent composition API for SWI-Prolog’s extensions, based on the extension-specific directives and term expansions, that would let multiple extensions be composed easily, perhaps by leveraging some self-descriptive metadata.
The metadata might be a DCG, perhaps? After all, Prolog provides great built-in language engineering tools.
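As a purely hypothetical straw man of such a composition API: each extension registers its transpiler as a named pass together with self-descriptive metadata (here just atoms naming the dialect it consumes and produces; a DCG describing each dialect’s grammar could play the same role), and a driver folds the passes over every clause instead of letting the first hook win:

```prolog
% Straw-man sketch, not an existing library.
:- multifile expansion_pass/4.
% expansion_pass(Name, Consumes, Produces, ExpansionGoal)
expansion_pass(oolp, logtalk_plp, plp,    oolp_expand).   % hypothetical
expansion_pass(plp,  plp,         prolog, plp_expand).    % hypothetical

% Thread each clause through every registered pass in turn.
compose_expansion(Term0, Term) :-
    findall(G, expansion_pass(_, _, _, G), Goals),
    foldl(apply_pass, Goals, Term0, Term).

apply_pass(Goal, Term0, Term) :-
    (   call(Goal, Term0, Term)
    ->  true
    ;   Term = Term0            % pass not applicable: keep clause
    ).
```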

What do you think? Is this hopeless wishful thinking, or a new feature that would multiply the practical utility of SWI’s uber-rich extension ecosystem?


This is a very good topic. Part of the problem is SWI-Prolog’s old term/goal expansion mechanism. Some systems are more advanced in this respect. Once upon a time I had the hope that the Prolog community would come up with an (informal) standard on this, but this seems unlikely. I’ve started work on designing a new system with the intent to learn from all existing ones and remain 99% compatible. The design of that is mostly done, but implementing it is a major task. The biggest problem is translating source-position information to drive the IDE tools.

But then, even with that solved (and solving the expansion problem is possible), it is unclear how much of the problem you sketch goes away.

Ciao Prolog is probably the most advanced in composing program transformations.


Has it been investigated how compatible these theories are? I always supposed you could try using one or two together, but the performance and correctness when things interweave would not be something you could count on. Or so I assumed.

Regarding CLP + cplint, the problem is the interaction with tabling: Jan, does CLP also work with tabling?

Without understanding what is going on … could a loosely coupled “blackboard” architecture serve as integration glue?

In the “worst” case each tool contributes via engines, and one “engine” that serves them all sits at the center.

It’s not optimal, I guess, since true multi-paradigm programming is not achieved – just side-by-side paradigms …
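For what it is worth, here is a minimal sketch of that side-by-side setup using SWI-Prolog’s actual engine API (engine_create/3, engine_next/2); the two solver goals are placeholders:

```prolog
% Blackboard-style coordinator: each paradigm runs in its own
% engine; only answer terms cross the boundary.
% plp_solve/1 and asp_solve/1 are placeholder goals.
blackboard_demo(Answers) :-
    engine_create(A, plp_solve(A), PlpEngine),
    engine_create(B, asp_solve(B), AspEngine),
    collect([PlpEngine, AspEngine], Answers).

collect([], []).
collect([E|Es], Answers) :-
    (   engine_next(E, Answer)          % pull one answer
    ->  Answers = [Answer|Rest]
    ;   Answers = Rest                  % engine exhausted
    ),
    collect(Es, Rest).
```

Each engine keeps its own transpiled world; the clauses themselves never mix, which is exactly the side-by-side limitation.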

No. This might be one of the few things other Prologs can do and SWI-Prolog can’t. It is rumored to be not very hard; it waits for a (paid) project where this is required.

I am not a Prolog expert, but IMO combining different logic programming features is hard. Note that what we really want is not to combine two specific features, but to be able to develop different language features independently and combine them in an elegant way.

Imagine that before Moggi’s work on monads, each FP language had devised its own effects, and combining different effects required special treatment by the language designers. The current state of LP languages is similar to that of FP back then: still in the stone age, waiting for a general framework like monads.

BTW, IDE tooling is another productivity blocker, especially for debugging CLP(X) programs. Following constraint goal suspensions, domain reductions and resumptions is tough without help from an IDE. One intriguing option would be a Visual Studio Code (lean, fast, free, decently documented, massively adopted) extension using the seemingly modular and scalable Language Server Protocol (LSP) approach, where one can reuse the built-in tools of the language itself on a server. It seems to require communicating with the TypeScript VS Code client through JSON-RPC. I do not understand Pengines well enough to figure out whether pengine_rpc/3 supports such communication out of the box.

The LSP approach is also supposed to help with multi-language code files, which are the norm for language extensions (e.g., Prolog code snippets mixed with Logtalk code or LPAD code).
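For reference, the LSP wire format is just JSON-RPC messages framed by a Content-Length header, which SWI-Prolog can read with library(http/json) alone; as far as I can tell, pengine_rpc/3 speaks the Pengines HTTP protocol rather than JSON-RPC, so it would not do this out of the box. A minimal framing sketch:

```prolog
:- use_module(library(readutil)).
:- use_module(library(http/json)).

% Minimal sketch, not a full LSP server: read one JSON-RPC message
% framed as 'Content-Length: N', a blank line, then N bytes of JSON.
read_lsp_message(In, Message) :-
    read_line_to_string(In, Header),
    split_string(Header, ":", " \r", ["Content-Length", LenS]),
    number_string(Len, LenS),
    read_line_to_string(In, _Blank),    % skip the separator line
    read_string(In, Len, Payload),
    atom_json_dict(Payload, Message, []).
```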

I see the current Explainable AI trend as an opportunity for an LP revival. But the simplicity of SLDNF reasoning, which makes Prolog programs easy to understand and debug, is immediately lost with most practical LP extensions. They either compile the intensional representations into optimized data structures that make the reasoning hard to follow and debugging almost impossible (e.g., PLP compilers targeting BDDs or arithmetic circuits, ASP compilers targeting grounded program sets), and/or follow complex algorithms that are no longer depth-first, top-to-bottom, left-to-right (e.g., tabled CLP).

Until appropriate tooling becomes available to help programmers follow the underlying reasoning of the engine in terms of the high-level concepts they use to program, those very powerful symbolic engines are in practice dark grey boxes, not white boxes, and so the argument for machine-learning them rather than neural nets is considerably weakened.

Sorry for the half-random rambling :slight_smile:

Something of that sort already exists :slight_smile: see @jamesnvc’s LSP server.
For IDE debugger support, one would also need my Debug Adapter Protocol (DAP) server, though there’s no VS Code support yet, as I don’t use this editor myself (see this GH issue). Both work quite well in my experience with GNU Emacs’ lsp-mode and dap-mode extensions, respectively.

This is a good point.

Most Probabilistic LP extensions have a well-defined model-theoretic semantics as a probability distribution over well-founded or stable models. The declarative semantics of CLP has been the object of many publications, as has that of OOLP languages such as Flora and LOGIN (for the elderly and LP history buffs out there who may have heard of those), but it is not yet defined for Logtalk, AFAIK (but I do not know much), neither by extending Clark’s completion nor by extending well-founded or stable models.

You are right that a seamless code and operational-semantics integration should be guided by prior work on declarative-semantics integration.

I think I do not agree :slight_smile: The value of a declarative model is that humans can read it and validate it against other knowledge they have about the world. A machine-learning model (as it is commonly understood now) is basically a set of numbers whose meaning is very hard for humans to judge. There is little choice but to apply it to data and see how it performs.

Procedural specifications are, for humans (well, for the rare subspecies of programmers :slight_smile: ), fairly easy to debug. Once the possible input values become less easy to create test cases for and/or no good test cases exist, it gets pretty hard to validate a procedural specification. A concise declarative specification is easier to understand. Of course, we must trust the reasoner.
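A classic textbook illustration of that gap is sorting: the declarative specification is trivial to read and validate against one’s idea of “sorted”, while any efficient procedural sort takes real effort to check:

```prolog
% Declarative specification of sorting: easy to validate,
% hopeless as an implementation (permutation/2 is O(n!)).
sorted(List, Sorted) :-
    permutation(List, Sorted),   % Sorted reorders List...
    ordered(Sorted).             % ...into non-decreasing order

ordered([]).
ordered([_]).
ordered([X, Y|Rest]) :-
    X =< Y,
    ordered([Y|Rest]).
```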

Debugging declarative specifications is different. You typically can’t “step through the execution”. Depending on the formalism and solver there are often alternatives. They do need to be learned.