Interesting thoughts! I guess discussing the merits of symbolic vs. sub-symbolic (connectionist) AI mostly makes sense when considering specific applications. For (auditory and visual) object recognition the sub-symbolic approach clearly has an edge, sometimes even outperforming humans.
In my research I use Prolog to represent scientific theories, where an "expert" states knowledge in a top-down fashion (in my next project I want to use Inductive Logic Programming to induce scientific theories from empirical data). When I discuss my logic programming approach to theory representation with colleagues, I frequently get comments of the type "but isn't this an old approach; what about deep learning?", and this really annoys me. I have some experience with neural networks and support vector machines, but there are some things that I think would be impossible (or really convoluted) to do with these schemes, and where a symbolic approach has an advantage.
For example, how would one encode Darwin’s theory of natural selection as a neural network? This is very natural in a logic program.
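To make that concrete, here is a toy sketch (my own illustration, not a worked-out theory) of how the core argument of natural selection might be stated as Horn clauses; all predicate names are invented for the example:

```prolog
% A trait tends to spread in a population if it varies among
% individuals, is heritable, and confers a fitness advantage.
spreads(Trait, Pop) :-
    varies(Trait, Pop),
    heritable(Trait),
    fitness_advantage(Trait, Pop).

% A trait varies in a population if some individual has it
% and some other individual does not.
varies(Trait, Pop) :-
    member(I, Pop), has_trait(I, Trait),
    member(J, Pop), \+ has_trait(J, Trait).
```

Each clause is a readable, inspectable statement of the theory, and the interpreter can explain any conclusion by showing which clauses fired; it is hard to see what the analogous artifact would even be in a trained network.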
And what about something like Russell and Norvig's internet shopping world (chapter 12)? To me at least, it makes no sense at all to try to represent this in some kind of neural network.
When representing scientific theories as logic programs I have had interesting experiences several times, where I got the (subjective) feeling that the code really was showing intelligent creativity. In these cases the theories generated conclusions that I did not anticipate at all, and that I initially questioned, but which then turned out to make perfect sense, and which sometimes were even validated in empirical studies that I did not know of initially.
I stumbled across this thread while trying to figure out where to post this (this is my first post here, so please excuse me if I have violated any rules for posting content). I've spent the last year or so developing an experimental neurosymbolic AI system called DeepClause. I've just released it on GitHub: GitHub - deepclause/deepclause-desktop: DeepClause Desktop App
The core of DeepClause (currently distributed as an Electron app) is a Prolog-based DSL that can combine LLM calls with regular Prolog calls and can be used to encode AI agent behaviors as executable logic programs. The ultimate goal of this project is to allow users to build "accountable agents": systems that are not only contextually aware (LLMs) and goal-oriented (agents), but also logically sound (Prolog), introspectively explainable, and operationally safe.
I am posting this here in the hopes of getting some feedback and comments on the project, especially from those working at the intersection of LLMs, logic programming, neurosymbolic AI etc.
Also, I would like to express a big thanks to Jan Wielemaker and all the other contributors to SWI-Prolog. Without SWI, and especially the WASM module, it would not have been possible to build this.
Looking forward to interesting discussions!
(In case you want to try it out: the macOS builds are currently broken due to Apple's code signing requirements, but building from source should work.)
In my experience it just means you need to be a little persistent to get it installed. In any case, this should eventually be resolved. It is pretty hard though, so it would be great if someone with experience in code signing and CMake-based packaging stepped forward. As I understand it, we probably also need to split the system into a framework and the application; I have no experience at all with how to do this.
We have achieved a neuro-symbolic AI integration through a bridging protocol, which has been applied in the medical field. This approach extends both the expressive form and the capability of Prolog, enabling it to function not only as a standalone symbolic AI implementation generalizable across various domains, but also as a bridge to neural networks (e.g., LLMs). A comprehensive integration methodology, rather than mere API calls, will be systematically introduced in our upcoming releases. We welcome your feedback, and we'd love for you to join us and help build this together! Paper: [link], GitHub: [link]
Hi @meditans, yes, it would be possible, but it would require some work, since the system in its current form delegates everything LLM- and user-input-related back to the calling JavaScript code.
The DML "compilation" and execution happen inside a Prolog engine. During "compilation", all @-predicates that need LLM calls are substituted with stubs that use engine_yield, so that the calling JavaScript code can run the LLM call(s) and pass the results back using engine_post. The same mechanism is used to capture input from a user, among other things.
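For anyone unfamiliar with the engine API, here is a minimal sketch of that yield/post round trip using SWI-Prolog's built-in engine predicates. This is my illustration of the general pattern, not DeepClause's actual code; in DeepClause the driving side is JavaScript, and `run_llm/2` here stands in for whatever the host does to answer the request:

```prolog
% Inside the engine: a stub yields a request term to the host,
% then suspends until the host posts a reply.
ask_llm(Prompt, Answer) :-
    engine_yield(llm_call(Prompt)),   % engine_next/2 returns llm_call(Prompt)
    engine_fetch(Answer).             % picks up the term sent via engine_post/2

% Host-side driver (shown in Prolog for illustration only):
drive(Engine) :-
    (   engine_next(Engine, llm_call(Prompt))
    ->  run_llm(Prompt, Answer),      % placeholder for the real LLM call
        engine_post(Engine, Answer),  % stored for engine_fetch/1
        drive(Engine)                 % engine_next/2 resumes the engine
    ;   true
    ).
```

Note that engine_post/2 only stores the term; the engine actually resumes on the next engine_next/2 call, which is why the driver loops.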
If you want to use it without Electron but are still OK with using JavaScript/Node, you could try it right away: take the current source code and start hacking on it (look for runDmlAsync()).