Thanks for the shout out
The ARC stuff in vision_thing will probably stay a draft. I was initially interested in ARC because it’s a set of logic puzzles with very few examples, which seemed like a perfect match for ILP approaches and Louise in particular. But it turns out it’s possible to apply some simple augmentations, like colour changes, rotations and translations, and make a big dataset that can then be used to solve about 25% of the training tasks without anything like reasoning (just with a deep neural net). So I mostly gave up on ARC.
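To be concrete about what I mean by "simple augmentations": ARC tasks are small grids of colour indices, and each of the transforms above is a few lines of code. This is just an illustrative sketch (the function names are mine, not from any ARC codebase):

```python
import random

def rotate(grid):
    """Rotate a grid (list of lists of colour ints) 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def translate(grid, dy, dx):
    """Translate a grid by (dy, dx) with wrap-around."""
    h, w = len(grid), len(grid[0])
    return [[grid[(r - dy) % h][(c - dx) % w] for c in range(w)]
            for r in range(h)]

def recolour(grid, mapping):
    """Relabel colours consistently via a permutation of 0..9 (ARC's palette)."""
    return [[mapping[c] for c in row] for row in grid]

grid = [[1, 2], [3, 4]]
rotated = rotate(grid)                 # [[3, 1], [4, 2]]
shifted = translate(grid, 1, 0)        # [[3, 4], [1, 2]]
perm = list(range(10))
random.shuffle(perm)
recoloured = recolour(grid, perm)
```

Apply the same transform to a task's input and output grids and you get a new, label-consistent training pair; do that a few hundred times per task and you have a dataset big enough for a deep net.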
On the other hand, I’m working on a new project for vision_thing, where the idea is to learn L-systems, i.e. grammars that describe plants and some fractals. That is cheating a little, because ILP algorithms are very good at learning grammars, and fractals aren’t arbitrary images. For general image recognition NNs are fine, so there’s no good reason to reinvent the wheel.
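In case L-systems are unfamiliar: they’re just parallel rewriting systems, where every symbol in a string is rewritten by its rule at each step. A minimal sketch (not from vision_thing, just for illustration):

```python
def lsystem(axiom, rules, n):
    """Expand an L-system: rewrite every symbol in parallel, n times.

    Symbols with no rule are left unchanged."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original "algae" system: A -> AB, B -> A.
algae = lsystem("A", {"A": "AB", "B": "A"}, 4)   # "ABAABABA"

# A Koch-curve-style system: F draws forward, + and - turn the pen.
koch = lsystem("F", {"F": "F+F-F-F+F"}, 2)
```

The learning task is the inverse direction: given expanded strings (or the images they draw), recover the rewrite rules, which is exactly the grammar-induction setting ILP is good at.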
Regarding Metagol and Louise - well, there are tradeoffs. Metagol is more expressive, whereas Louise is more efficient. Metagol is better at learning recursive theories from fewer examples, but this expressivity comes at a high computational cost and so it can only learn short programs (up to about 5 clauses). Louise may need more examples to learn a recursive theory but it can learn really large programs (a few thousand clauses).
A few of my colleagues are actively working on Metagol. For neurosymbolic work using Metagol see Wang-Zhou Dai and Stephen Muggleton’s latest paper:
Oh and if you’re interested in neurosymbolic AI, keep an eye on the first IJCLR, a conference that will bring together people from a few different communities in symbolic machine learning: