PeTTa directly translates MeTTa ( https://metta-lang.dev/ ) functions into Prolog clauses and enables efficient execution of MeTTa code. The translation preserves the strong performance characteristics of SWI-Prolog and allows overhead-free bidirectional interop with Prolog code.
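To give a flavor of the idea, here is a toy illustration of such a mapping (a hypothetical sketch, not PeTTa's actual generated code, which may differ): a MeTTa function equation can become a Prolog clause with an extra argument for the result.

```prolog
% Hypothetical sketch of a MeTTa->Prolog translation, assuming a
% result-argument convention; PeTTa's real output may look different.
%
% MeTTa source:  (= (add2 $x $y) (+ $x $y))
add2(X, Y, R) :- R is X + Y.

% MeTTa source:  (= (fact 0) 1)
%                (= (fact $n) (* $n (fact (- $n 1))))
fact(0, 1).
fact(N, R) :- N > 0, N1 is N - 1, fact(N1, R1), R is N * R1.
```

Because the result is ordinary Prolog, SWI-Prolog's clause indexing and optimizations apply directly to the translated code, and Prolog predicates can call the translated MeTTa functions (and vice versa) with no marshalling layer.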
This demonstrates that a logic programming language can easily host a fully featured, and even extended, functional language while preserving logic programming capabilities and high performance. By contrast, embedding logic programming into a functional language has historically produced slow, toy-like systems (such as Kanren variants), often with weak integration into the host language. Crucially, functional runtimes lack the deep execution, indexing, and search optimizations needed for efficient logic programming, and reproducing them would be an immense undertaking. (Thank you @jan and others for your decades of work on SWI-Prolog, it is king!)
Also, PeTTa is now actively used by Peter and me at Life-like AI (https://life-like.ai) for real-time neuro-symbolic integration and empirical reasoning, including mobile robotics applications that leverage our NARS-based reasoning technology. Working with deep neural networks and symbolic mechanisms in the same language has never been easier or more elegant: PeTTa integrates PyTorch, the OpenAI LLM/VLM API, vector databases (FAISS), and scalable symbolic atom spaces (MORK), in addition to SWI-Prolog's own excellent predicate store. Have fun with PeTTa, and please check out the examples in the trueagi-io/PeTTa repository on GitHub.
Very nice! I’ll have to port the Metta-WAM LSP to run on PeTTa; it will presumably be a bit easier for other MeTTa devs to make use of if it can be run a bit more independently!
Do you have some data on the efficiency compared to the native version?
Not sure it is feasible, but would it be possible to create a SWI-Prolog pack? That provides permanent visibility as well as very simple installation. Let me know if you need help.
We carried out a detailed performance analysis a few weeks ago, and will soon link replicable benchmarks in the repository Wiki. On average, PeTTa is at least ~500× faster than the Hyperon-Experimental Rust interpreter, which is a classical AST-walking interpreter and lacks most optimizations.
I hope this is not a charged question, but I worked with @logicmoo on Mettalog, a MeTTa-to-Prolog transpiler for SingularityNET, from December 2024 to February 2025. Is this a daughter project, a completely separate project, something else?
My recollection is that further development of Mettalog was stopped, with only occasional maintenance since, even though, to my understanding, it was the most mature and performant of the many MeTTa runtimes under development (or just in planning) internally.
I hope this is not asking questions that cannot be answered in a public forum. If so, my apologies!
Hi Stassa, as far as I understand, PeTTa is an entirely new implementation of MeTTa in SWI-Prolog, separate from the other SWI-Prolog implementation (Mettalog). The P in PeTTa is from Patrick, I believe, who wrote the source.
@drspro: Indeed. And officially PeTTa=“Prolog Type Talk” to be close to MeTTa=“Meta Type Talk”.
@stassa.p: No worries at all, nothing charged in your question.
Douglas (@logicmoo) is a friend and talented colleague. When I pushed the first working PeTTa proof-of-concept, I reached out to him to discuss interesting technical aspects. I also made clear PeTTa wasn’t meant to undercut Mettalog efforts (which started out as an interpreter), but to pursue an as-direct-as-possible MeTTa→Prolog source-to-source translation approach with significantly higher resulting code execution performance. We’ve had a productive exchange, and he’s been supportive of PeTTa’s open-source release.
For context, I started PeTTa because it became increasingly clear to me how MeTTa can be mapped to Prolog, and prior to starting work on PeTTa I already had a detailed picture of what it would take. Also, I wouldn’t have started PeTTa if Mettalog had provided similar reliability and execution performance, as my focus is on AI, not on programming languages.