Neurosymbolic AI

Interesting thoughts! I suppose discussing the merits of symbolic vs. subsymbolic (connectionist) AI mostly makes sense in the context of specific applications. For (auditory and visual) object recognition the subsymbolic approach clearly has an edge, sometimes even outperforming humans.

In my research I use Prolog to represent scientific theories, where an “expert” states knowledge in a top-down fashion (in my next project I want to use Inductive Logic Programming to induce scientific theories from empirical data). When I discuss my logic programming approach to theory representation with colleagues, I frequently get comments of the type “but isn’t this an old approach; what about deep learning”, and this really annoys me. I have some experience with neural networks and support vector machines, but there are some things that I think would be impossible (or really convoluted) to do with these schemes, and where a symbolic approach has an advantage.

  • For example, how would one encode Darwin’s theory of natural selection as a neural network? This is very natural in a logic program.

  • And what about something like Russell and Norvig’s internet shopping world (chapter 12)? To me at least, it makes no sense at all to try to represent this in some kind of neural network.
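To make the first bullet concrete, here is a deliberately toy-sized sketch of one selection step as a logic program. This is my own illustration, not code from any poster; all predicate names (heritable/1, advantage/2, variant/2, and so on) are invented for the example:

```prolog
% Toy sketch of one step of natural selection (hypothetical predicates).
% A heritable variant that confers a fitness advantage in the current
% environment tends to increase in frequency.
heritable(long_neck).
heritable(short_neck).
advantage(long_neck, drought).        % long necks reach high foliage

variant(giraffe_a, long_neck).
variant(giraffe_b, short_neck).

% X outcompetes Y in Env if X's variant is advantageous there and Y's is not.
outcompetes(X, Y, Env) :-
    variant(X, TX), variant(Y, TY),
    X \= Y,
    advantage(TX, Env),
    \+ advantage(TY, Env).

% A heritable trait spreads when some carrier outcompetes a non-carrier.
spreads(Trait, Env) :-
    heritable(Trait),
    variant(X, Trait),
    outcompetes(X, _, Env).
```

Querying ?- spreads(T, drought). succeeds with T = long_neck, while short_neck yields no solution. Obviously a real encoding would model populations and frequencies, but even this fragment shows how directly the theory's conceptual structure maps onto clauses.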

When representing scientific theories as logic programs I have repeatedly had interesting experiences where I got the (subjective) feeling that the code was showing genuinely intelligent creativity. In these cases theories generated conclusions that I did not anticipate at all, and that I initially questioned, but which then turned out to make perfect sense, and which were sometimes even validated in empirical studies I did not know of initially.

Hi all,

I stumbled across this thread while trying to figure out where to post this (it’s my first post, so please excuse me if I’ve violated any rules for posting content). I’ve spent the last year or so developing an experimental neurosymbolic AI system called DeepClause. I’ve just released it on GitHub: GitHub - deepclause/deepclause-desktop: DeepClause Desktop App

The core of DeepClause (currently distributed as an Electron app) is a Prolog-based DSL that combines LLM calls with regular Prolog calls and can be used to encode AI agent behaviors as executable logic programs. The ultimate goal of this project is to allow users to build “accountable agents”: systems that are not only contextually aware (LLMs) and goal-oriented (agents), but also logically sound (Prolog), introspectively explainable, and operationally safe.

I am posting this here in the hope of getting some feedback and comments on the project, especially from those working at the intersection of LLMs, logic programming, and neurosymbolic AI.

Also, I would like to express a big, big thanks to Jan Wielemaker and all the other contributors to SWI-Prolog. Without SWI, and especially the WASM module, it would not have been possible to build this.

Looking forward to interesting discussions!

(In case you want to try it out: the macOS builds are currently broken due to Apple’s code-signing requirements, but building from source should work.)


Sounds really interesting!

In my experience it just means you need to be a little persistent to get it installed. In any case, this should eventually be resolved. It is pretty hard, though, so it would be great if someone with experience dealing with code signing and CMake-based packaging steps forward. As I understand it, we probably also need to split the system into a framework and the application. I have no experience at all with how to do this.

Thanks, Jan!

As the app is Electron-based and uses SWI-Prolog through WASM, I think it should run OK once I actually get an Apple Developer ID…

We have achieved a neuro-symbolic AI integration through a bridging protocol, which has been applied in the medical field. This approach extends both the expressive form and capability of Prolog, enabling it to function not only as a standalone symbolic AI implementation generalizable across various domains, but also as a bridge to neural networks (e.g., LLMs). A comprehensive integration methodology—rather than mere API calls—will be systematically introduced in our upcoming releases. We welcome your valuable feedback. We’d love for you to join us and help build this together! Paper: [link], GitHub: [link]


This seems quite interesting @apfadler. Would it be possible to use this as a swipl library directly or is the electron wrapper the only way?

Hi @meditans, yes it would be possible, but it would require some work, since the system in its current form delegates everything LLM- and user-input-related back to the calling JavaScript code.

The DML “compilation” and execution happen inside a Prolog engine. During “compilation” all @-predicates that need LLM calls get substituted with stubs that use engine_yield, so that the calling JavaScript code can run the LLM call(s) and pass the results back using engine_post. This mechanism is also used to capture input from the user, among other things.
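For readers unfamiliar with SWI-Prolog engines, a minimal sketch of what such a stub might look like follows. This is my own reconstruction of the shape described above, not DeepClause’s actual code; only engine_yield/1, engine_post/2, and engine_fetch/1 are real SWI-Prolog built-ins, and llm_stub/2 and llm_request/1 are invented names:

```prolog
% Hypothetical stub: an @-predicate needing an LLM call is replaced by
% a goal that yields a request term to the host and waits for the reply.
llm_stub(Prompt, Result) :-
    engine_yield(llm_request(Prompt)),  % suspend; host sees the request
    engine_fetch(Result).               % resume with the term the host
                                        % delivered via engine_post/2
```

On the JavaScript side the host would then receive llm_request(Prompt) from the engine, perform the actual LLM call, post the result back with engine_post, and resume the engine, which continues as if llm_stub/2 had simply succeeded with Result bound.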

If you want to use it without Electron but are still OK with using JavaScript/Node, you could try right away: just take the current source code and start hacking on it (look for runDmlAsync()).

Hi all,

Just wanted to give a short update on the state of my project: I’ve decided to drop the Electron app and also to simplify the DSL and streamline its implementation. The result is the new TypeScript SDK and CLI tool: GitHub - deepclause/deepclause-sdk: DeepClause Typescript DML SDK and runtime

The system now focuses on creating and describing “Skills” that can be used as small standalone tools or inside a coding agent.

The general workflow is:

  1. Create a markdown document describing your tool/skill
  2. “Compile” it into the Prolog DSL
  3. Run it through the runtime

Example:


cat > api-client.md << 'EOF'
# API Client Generator
Generate a TypeScript API client from an OpenAPI spec URL.

## Arguments
- SpecUrl: URL to an OpenAPI/Swagger JSON specification

## Behavior
- Fetch the OpenAPI spec from SpecUrl
- Extract endpoints and types
- Generate typed client code
- Write to output file
EOF

# Compile it once
deepclause compile api-client.md

# Run it 
deepclause run api-client.dml "https://api.example.com/openapi.json"

This will then create a DML/Prolog file like this:


agent_main(SpecUrl) :-
    system("You are an API client generator..."),
    exec(web_fetch(url: SpecUrl), Spec),
    task("Extract endpoints from: {Spec}", Endpoints),
    task("Generate TypeScript client for: {Endpoints}", Code),
    exec(vm_exec(command: "cat > client.ts"), Code),
    answer("Generated client.ts").

(See the GitHub page for how to configure LLM settings.)

At the core of the simplified DSL is the task/N predicate: it takes a prompt as input and runs a small agentic loop. If the agent succeeds, the predicate succeeds; otherwise it fails. Any memory created by the LLM/agent loop is tracked during execution by a meta-interpreter. Upon failure and subsequent backtracking, the memory is reset to that of the last choice point.
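Since task/N behaves like an ordinary Prolog goal, failure-driven control flow composes with it naturally. A hypothetical usage sketch (my own, with invented predicate names and prompts, assuming the task/2 form shown in the generated example above):

```prolog
% Hypothetical sketch: if the first task fails (or a later goal forces
% backtracking), the runtime rolls agent memory back to the choice
% point before the second alternative is tried with a fresh prompt.
summarize(Url, Summary) :-
    (   task("Summarize the page at {Url} in 3 bullet points", Summary)
    ;   task("Fetch {Url} and give a one-line summary instead", Summary)
    ).
```

The memory-rollback semantics described above would be what makes such a disjunction safe: the retry starts from the same agent state as the first attempt, rather than from whatever partial state the failed attempt left behind.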

Note here that tools (which are called automatically inside the agentic loop) can be defined using the tool/2 wrapper. The exec predicate then hands control back to the runtime, which executes the actual tool call defined in JS/TypeScript (the meta-interpreter hands control back to JS, the tool is executed there, and the results are passed back to the meta-interpreter).

To integrate the task predicate with “normal” Prolog code, it can bind output variables, e.g.:

task("generate a random number and store it in Output", Output),
Output > 42,
output("Number is greater than 42").

For type safety one can use, e.g., task("generate a random number and store it in Output", integer(Output)).

I hope this might be of interest to some of you and I would love to get some feedback or comments!

A short observation on Prolog and the current generation of LLMs:
While I wrote the core of the original version mostly by myself (the Prolog parts, not the JS parts), the current version is a complete rewrite using Opus 4.5. While older models still struggled with Prolog, I have to admit that (as a Prolog amateur) Opus 4.5 appears to produce very nice, working Prolog code. I think it would be very interesting to hear how some of the more experienced Prolog users in this forum judge the quality of Opus-generated Prolog code.
