We have a segfault

Thanks Jan! I know I’m abusing table/1 and untable/1 with dynamic predicates; that hadn’t caused trouble until now, so I kept using them.

It all has to do with the roundabout way I’m loading training data into my ILP systems. I described what I do here:

It’s a bit complicated but, in short, I have an inductive meta-interpreter for ILP that is trained on data asserted to the dynamic database. I have to table the meta-interpreter because if I don’t, it happily constructs, and tries to prove, left-recursive programs, and then it can go infinite. It doesn’t help to table the programs it constructs, because it’s the meta-interpreter itself that goes infinite.
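For context, this is the kind of left recursion I mean. A toy program like the one below (not my actual meta-interpreter, just an illustration) loops forever under plain SLD resolution, but terminates once the recursive predicate is tabled:

```prolog
:- table path/2.

edge(a, b).
edge(b, c).

% Left-recursive clause: without the table/2 declaration above,
% ?- path(a, c). recurses on path/2 before ever touching edge/2
% and never terminates. With tabling it succeeds.
path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, Y) :- edge(X, Y).
```

The meta-interpreter can *generate* clauses shaped like the left-recursive one above, which is why tabling has to go on the meta-interpreter itself.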

The problem is that if I change the training data with assert/retract and re-run the meta-interpreter, I sometimes get results that are consistent with the original training data, from before the database update. I’m not sure exactly how or why that happens (I think the updated training data are tabled as dynamic and incremental automatically, which is neat btw), but it seems clear that the meta-interpreter is retrieving the results stored in the tables and ignoring the updated training data.
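A toy reproduction of the staleness I’m seeing (predicate names made up for the example). With plain tabling, answers computed from the dynamic data are cached and are not invalidated when the data changes:

```prolog
:- dynamic example/1.
:- table q/1.

example(1).

% q/1 stands in for the meta-interpreter: it consults the
% dynamic "training data" in example/1.
q(X) :- example(X).

% ?- q(X).                              % caches X = 1 in the table
% ?- retract(example(1)),
%    assertz(example(2)).               % update the training data
% ?- q(X).                              % still answers X = 1 from the
%                                       % stale table, not X = 2
```

As I understand it, declaring both sides incremental (`:- dynamic example/1 as incremental.` and `:- table q/1 as incremental.`) is what makes SWI-Prolog invalidate the affected tables automatically on assert/retract, which may be what I’m half-getting by accident.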

So I need to clean up the tables between training runs. I can use abolish_all_tables/0 for that, like the docs suggest, but if I untable the meta-interpreter, I have to table it again the next time it runs or it will go infinite. So I use table/1. Is that what’s causing trouble? And if so, how else can I restart the tabling process?
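Concretely, this is roughly the cycle I run between training runs (sketch; `prove/1` and `retrain/1` are stand-in names, not my real predicates):

```prolog
:- dynamic example/1.
:- table prove/1.

% Stand-in for the meta-interpreter.
prove(X) :- example(X).

% Reset tabling and swap in new training data before the next run.
retrain(NewExamples) :-
    untable(prove/1),            % drop the tabling declaration
    abolish_all_tables,          % throw away all cached answers
    retractall(example(_)),
    forall(member(E, NewExamples),
           assertz(example(E))),
    table(prove/1).              % re-table before the next run
```

It’s this untable/1-then-table/1 round trip, done repeatedly against a dynamic predicate, that I suspect is triggering the segfault.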

To be honest, I’d rather not do the tabling/untabling all the time, because it can be expensive and slows things down when I set a large table space limit, but I can’t think of what else to do. I don’t want to force the user to start a new session for every new training run.