How to clear private tabling tables for a thread

I’m using: SWI-Prolog version 8.1.19

I want the code to: clear private tabling tables for a thread.

I would like a tabled predicate to live only for the duration of a higher-level predicate call. When that call returns/exits, its tables should be abolished/cleared so that the next call starts fresh.

The documentation states that private tables are abolished when the thread exits. In my case the thread will not exit; it continues to service other requests/users/calls, because it belongs to a thread pool.

What would be the appropriate way to abolish/clear the private tables of a thread? I tried untable(predicate/arity), but it did not seem to work.

Nan


Seems this is missing. Internally we have '$tbl_abolish_local_tables'/0, which is part of abolish_all_tables/0. You can use that, but as with every predicate whose name starts with $ or which is private to some module, it can always change or disappear. Note that if you do not use shared tables, abolish_all_tables/0 is fine. I guess we also need abolish_private_tables/0 and abolish_shared_tables/0.
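For instance, a pooled worker that uses only private tables could clear everything between requests with abolish_all_tables/0. A minimal sketch, assuming no shared tables are in use (sq/2 and serve/2 are made-up illustration names, not part of SWI-Prolog):

```prolog
:- table sq/2 as private.

sq(X, Y) :- Y is X * X.

% Answer one request using the tabled predicate, then drop all
% tables so the next request served by this pooled thread
% starts from fresh.
serve(X, Y) :-
    sq(X, Y),
    abolish_all_tables.
```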

untable/1 is typically not a good idea. Partial removal is normally done using abolish_table_subgoals/1 or abolish_module_tables/1.
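For example, with a hypothetical tabled predicate path_cost/3, abolish_table_subgoals/1 removes only the tables whose subgoal unifies with the argument, leaving other tables intact:

```prolog
:- table path_cost/3.

base_cost(ams, bru, 5).
base_cost(ams, par, 10).

path_cost(A, B, C) :- base_cost(A, B, C).

% Drop only the path_cost tables starting from Origin; tables
% for other origins survive.  To drop every table owned by a
% module instead, use abolish_module_tables(Module).
clear_from(Origin) :-
    abolish_table_subgoals(path_cost(Origin, _, _)).
```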


Hi Jan, thank you very much. abolish_table_subgoals(ip_xquery_value(A, B)) works. But I would like to confirm: if ip_xquery_value/2 is defined as a private (and subsumptive) table, does abolish_table_subgoals/1 only abolish the ip_xquery_value(A,B) tables of the calling thread? Is that right?
Adding abolish_private_tables/0 and abolish_shared_tables/0 would provide more flexibility.

The git sources provide abolish_private_tables/0 and abolish_shared_tables/0.

Related to this topic:

I’ve been looking at the Held–Karp algorithm for the travelling salesman problem (Held–Karp algorithm - Wikipedia). It turns out that using tabling (“memoizing”) is quite effective at reducing execution time. However, it can produce some very large tables - calculating the optimal tour of 18 cities can exceed the default table space size (1 GB). For this reason, I only want to use tabling during tour generation and free up the table space when finished (i.e., whole-table removal, not partial).
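As a toy illustration of the effect (not the Held–Karp code itself), tabling memoises overlapping subproblems, trading table space for time:

```prolog
:- table fib/2.

% Naive double recursion.  With the table directive each
% fib(N, F) answer is stored once, so run time is linear in N
% while the table grows by one entry per distinct subgoal;
% without it the recursion is exponential.
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fib(N1, F1), fib(N2, F2),
    F is F1 + F2.
```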

The simple way to do this is just to bracket the calculation with table/1 and untable/1. This works quite well, but the profiling results show that for smallish tours (10 cities), 80% of the total time is spent untabling (i.e., it took four times as long to delete the tabled values as it took to create them). At 15 cities this drops to 20%, although the absolute untabling time has increased (from ~0.7 sec. to 2 sec.). Most of the time is spent in abolish_table_subgoals/1. So one question is whether there is a better way of freeing up entire tables in circumstances like these.

A second question is whether an error can be avoided when table space is exhausted. In this particular application, tabling is used as an optimization (like a cache), so there is a tradeoff between space and time. Rather than raising an exception under low memory, some results (older, or larger, or ?) could be removed to make room. Alternatively, any results that don’t fit could simply (optionally) not be tabled. Other suggestions?


untable/1 isn’t really meant for this. It was intended for removing a :- table directive and getting the correct state after a reconsult, or for interactively playing with tabling and untabling predicates. I’d normally just call abolish_all_tables/0 or one of the more selective ones. This being so expensive suggests there are a large number of tables.

To get more table space, simply use the commandline option or the table_space Prolog flag. Automatic deletion of tables is indeed one of the things that should be handled someday. AFAIK no system does this at the moment … That is a whole new research area that is currently not on my shortlist :slight_smile: Most of the infrastructure is already in place, including the ability to abolish a table while it is incomplete. This causes the table to be really abolished at the moment of completion. That could be used for tables that are created to support the outermost tabled goal. But yes, sometimes this is a good idea and sometimes not :frowning:

Moving the table/1 call to a directive and using abolish_module_tables/1 does pretty much the same thing. Looking at the profiling results, most of the table-deletion time is in trie_gen, which I assume is generating all the matching tables just so they can be deleted (2304 tables for a 10-city tour; many times more as the number of cities increases). Is there any realistic prospect of improving performance in the case where the whole set of a predicate’s tables is being deleted?

Given there is no quantitative difference, I prefer using table/untable rather than a directive and one of the abolish_.. variants, i.e.:

salesman_HK([P1|Ps], Tour, Length) :-
	table(shortest_segment/5),
	min_search(sel_shortest_tour(P1,Ps,Tour,Length),Length),
	untable(shortest_segment/5).
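One possible refinement of this bracketing pattern (a sketch, not from the thread): wrapping the call in setup_call_cleanup/3 guarantees that untable/1 runs even when the bracketed goal fails or throws, so a failed search cannot leave stale tables behind. cost/2 and with_fresh_table/1 are made-up illustration names:

```prolog
cost(X, Y) :- Y is X + 1.

% Table cost/2 only for the duration of Goal; the cleanup step
% runs on success, failure, and exceptions alike.
with_fresh_table(Goal) :-
    setup_call_cleanup(
        table(cost/2),
        call(Goal),
        untable(cost/2)).
```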

Yes, but any amount of table space will run out at some point. In this type of application, tables are just used as a cache. If tabling just stopped creating new tables when space ran out, the program would run fine albeit more slowly. Perhaps this isn’t the best cache management strategy, but I think it’s better than an error. More effective “automatic deletion” strategies can wait.