Tabling concurrency: new posix functions

This may (or may not) be relevant since Jan is working on the concurrent implementation of tabling. Several new POSIX functions are now available in the newly released glibc 2.30.

Add new POSIX-proposed pthread_cond_clockwait, pthread_mutex_clocklock, pthread_rwlock_clockrdlock, pthread_rwlock_clockwrlock and sem_clockwait functions. These behave similarly to their “timed” equivalents, but also accept a clockid_t parameter to determine which clock their timeout should be measured against. All functions allow waiting against CLOCK_MONOTONIC and CLOCK_REALTIME. The decision of which clock to be used is made at the time of the wait (unlike with pthread_condattr_setclock, which requires the clock choice at initialization time).

This is important for building stable systems, because CLOCK_REALTIME can go backwards or make strange jumps in some cases (it took Erlang quite a while to solve this problem).

Thanks for the tips. Timed operations are not part of this work though. Most of it is about lock-free handling, a little mutex handling and a condition variable. It is good to see a timed mutex wait though, as that should allow us to make waiting for a mutex interruptible.

This is the best! :grin:

I ran a test to see what the table/1 directive actually does in SWI-Prolog. It seems that tabled data is thread-local by default, so you don’t need any locks anyway. You can test it as follows. I used the following code:

:- table concat/3.
concat([], X, X).
concat([X|Y], Z, [X|T]) :- concat(Y, Z, T).

Now you can run and inspect:

?- concat(X, Y, [1,2,3]).
X = [],
Y = [1, 2, 3] ;
Etc..

?- current_table(V,R).
V = concat(_6672, _6674, []),
Etc...

Now open a new thread (use the menu item “Run” → “New thread”). Try inspecting again; there is nothing:

?- current_table(V,R).
false.

So my conclusion was that tabled data is thread-local by default. Please correct me if I am wrong.
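
The effect is easy to mimic outside Prolog. Here is a minimal Java sketch (class name and keys are made up; this is not SWI-Prolog’s implementation): each thread owns its own answer table via ThreadLocal, so no locks are needed, and a fresh thread starts with an empty table, just like in the current_table/2 experiment above.

```java
import java.util.HashMap;
import java.util.Map;

public class ThreadLocalTable {
    // Each thread gets its own private answer table; no locking required.
    static final ThreadLocal<Map<String, Boolean>> TABLE =
            ThreadLocal.withInitial(HashMap::new);

    // Check whether a key stored by the calling thread is visible
    // from a freshly created thread (with thread-local tables it is not).
    static boolean seenInFreshThread(String key) {
        final boolean[] seen = new boolean[1];
        Thread t = new Thread(() -> seen[0] = TABLE.get().containsKey(key));
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return seen[0];
    }

    public static void main(String[] args) {
        TABLE.get().put("concat(_0,_1,[])", true);   // "table" an answer here
        System.out.println("this thread sees it:  "
                + TABLE.get().containsKey("concat(_0,_1,[])"));   // true
        System.out.println("fresh thread sees it: "
                + seenInFreshThread("concat(_0,_1,[])"));         // false
    }
}
```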

If the tabled data is not thread-local, you will need something else in the tabling implementation to access and modify the tabled data during a tabled call. But I did not yet consider operations that also involve a time-out.

But this could also be useful, though it would be a little redundant, since there is call_with_time_limit/2. You would only need an additional time-out for some deadlock situation. I don’t know enough about concurrent tabling to say whether it might run into deadlock situations. It might depend a little on how you organize your concurrent tabling and whether you allow memoized functions to run concurrently even if they later update the same key.

Right. Using the just-pushed git version you can do …

:- table concat/3 as shared.

… and you’ll get a different result. For a description:

?- help('tabling-shared').

What’s the difference between “as shared” and “as dynamic”?
“as dynamic” is mentioned here, but “as shared” is not listed:

:- table (:Specification)
https://www.swi-prolog.org/pldoc/man?predicate=table/1

‘as dynamic’ allows for incremental tabling, e.g. clauses can be modified with assert, retract and friends.

I checked the pushed git version. My current plan for the future is to use the same multiway trie that I already use for clause indexing, so that the same trie is also used for the tabled data. But I am not there yet; I only started thinking about this recently.

Currently tabled data is stored in a map and not in a trie: just a map with a suitable variant-term key. The variant-term key is simply the skeleton of the subgoal that is called. Every subgoal call basically first makes a call to copy_term to get a skeleton.
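
For illustration, here is a rough Java sketch of the variant-key idea (the string-based canonicalizer is my own simplification; a real system renames variables in the term itself, as copy_term does): variables are renamed to _0, _1, … in order of first appearance, so variant subgoals get the same key.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariantKey {
    // Prolog-style variables: identifiers starting with '_' or an upper-case letter.
    private static final Pattern VAR = Pattern.compile("\\b[_A-Z][A-Za-z0-9_]*");

    // Rename variables to _0, _1, ... in order of first appearance, so that
    // variant goals (equal up to variable names) map to the same key.
    static String variantKey(String goal) {
        Map<String, String> names = new LinkedHashMap<>();
        Matcher m = VAR.matcher(goal);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String canon = names.computeIfAbsent(m.group(), k -> "_" + names.size());
            m.appendReplacement(sb, canon);
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(variantKey("concat(X,Y,[1,2,3])"));  // concat(_0,_1,[1,2,3])
        System.out.println(variantKey("concat(A,B,[1,2,3])"));  // same key: a variant
    }
}
```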

So maybe there could be some benefit in using the same multiway trie that is already used in my clause indexing. In hindsight, my multiway trie, including the JIT-ing, already has a locking solution for shared data, which is basically needed for dynamic predicates that are not thread-local. I don’t want to reinvent the wheel. What I don’t understand is how the SWI-Prolog trie works. What would be a scenario where this returns true?

From src/pl-tabling.c
is_deadlock(atrie)

Is this something related to solving recursion? Or is this something
related to shared tabling? Or are both involved?

BTW: I think it’s amazing that you can use an ordinary map for tabling, compared to a trie. An ordinary map also has some advantages. If you are not really short of memory, allow yourself a full skeleton for every key:

  • If the map is realized as a hash table:
    you can compute a hash value over the full skeleton, taking into account all parts of the key: common prefixes shared among many keys as well as uncommon suffixes.

  • If the map is realized as a red-black tree:
    no hash value is computed; lexical comparison is used instead. The comparison will do unnecessary work on common prefixes, but it will also respect uncommon suffixes.
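
A small Java sketch of the two realizations side by side (the keys are made-up full skeletons): HashMap hashes the whole key, TreeMap compares lexically, and both distinguish keys by their uncommon suffix.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class FullSkeletonMaps {
    // Two variant keys that share a long common prefix and differ only
    // at the very end -- the typical shape of full-skeleton keys.
    static final String K1 = "concat(_0,_1,[1,2,3,4,5,6,7,8,9])";
    static final String K2 = "concat(_0,_1,[1,2,3,4,5,6,7,8,10])";

    public static void main(String[] args) {
        // Hash table: the hash is computed over the whole skeleton in a
        // single pass, so the uncommon suffix is taken into account.
        Map<String, Integer> hashed = new HashMap<>();
        hashed.put(K1, 1);
        hashed.put(K2, 2);

        // Red-black tree: no hash; each lookup lexically compares keys,
        // re-walking the common prefix, but still respects the suffix.
        Map<String, Integer> ordered = new TreeMap<>();
        ordered.put(K1, 1);
        ordered.put(K2, 2);

        System.out.println(hashed.get(K2) + " " + ordered.get(K2));  // 2 2
    }
}
```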

Most likely tries are applied only because of speed, and not because of memory savings. If you have a sub-map in a trie, this sub-map also needs map-entry space, so in the end it’s the same space usage as a full skeleton for every key? Maybe a little less.

But then the speed gain might be okay when the map is a tree. But does the trie also compare well against a simple map where we compute a hash value of the full skeleton? This hash value might point the way much better than a trie?

Well, we will only know through testing, sooner or later…

It is quite easy. A thread involved in shared tabling may be waiting for some answer table that is owned by some other thread. So, thread A may be waiting for thread B. If we can create a cycle of such dependencies, we have a problem :frowning:
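
A minimal Java sketch of that cycle check (an illustration of the idea, not SWI-Prolog’s actual is_deadlock code): follow the waits-for chain from the blocked thread; if it ever revisits a thread, we have a deadlock.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    // waitsFor.get(t) is the thread that t is currently blocked on,
    // or absent if t is running. A deadlock exists for `start` if
    // following the waits-for chain ever revisits a thread.
    static boolean isDeadlock(Map<String, String> waitsFor, String start) {
        Set<String> seen = new HashSet<>();
        String cur = start;
        while (waitsFor.containsKey(cur)) {
            cur = waitsFor.get(cur);
            if (!seen.add(cur))
                return true;   // the chain loops: nobody in it can proceed
        }
        return false;          // the chain ends at a running thread
    }

    public static void main(String[] args) {
        Map<String, String> w = new HashMap<>();
        w.put("A", "B");
        w.put("B", "A");                        // A and B wait on each other
        System.out.println(isDeadlock(w, "A")); // true
        w.clear();
        w.put("A", "B");                        // B is running
        System.out.println(isDeadlock(w, "A")); // false
    }
}
```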

Sure. You can use any data structure. Tries have advantages and disadvantages. Among the advantages are the facts that they are quite good at finding term variants, and that search and insertion are done in a single action, making it relatively cheap to insert an answer into a table and know whether it is new or not. In most tabling scenarios we have either nodes with a single child or nodes with many children. Note that answer tries do not contain entire terms, just the (possible) instantiation of the variables of the call variant.
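
The “search and insert in one action” property can be sketched in a few lines of Java (a toy trie over answer bindings encoded as strings; real answer tries store instantiations of the call variant’s variables, not strings):

```java
import java.util.HashMap;
import java.util.Map;

public class AnswerTrie {
    static final class Node {
        final Map<String, Node> children = new HashMap<>();
        boolean terminal;  // marks the end of a stored answer
    }

    private final Node root = new Node();

    // Insert an answer (a sequence of bindings for the call variant's
    // variables) and report whether it was new -- lookup and insert
    // happen in the same single walk over the trie.
    public boolean insert(String... bindings) {
        Node n = root;
        for (String b : bindings)
            n = n.children.computeIfAbsent(b, k -> new Node());
        boolean isNew = !n.terminal;
        n.terminal = true;
        return isNew;
    }

    public static void main(String[] args) {
        AnswerTrie t = new AnswerTrie();
        System.out.println(t.insert("[]", "[1,2,3]"));  // true: new answer
        System.out.println(t.insert("[]", "[1,2,3]"));  // false: duplicate
        System.out.println(t.insert("[1]", "[2,3]"));   // true: new answer
    }
}
```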

Once completed, SWI-Prolog compiles the trie into a clause that enumerates the members very efficiently and whose lifetime is subject to clause garbage collection. The clause is significantly more compact, and future versions will remove the actual trie as it is no longer needed.

All this is in the tabling literature, although there is quite a bit of reading to grasp all this and some of the papers describe outdated implementation techniques. I’m grateful to Theresa Swift and David S. Warren for their guidance.

For tabling, in my opinion, you don’t need a lookup that does an insert at the same time, although it’s nice to have. You can check for yourself with, for example, ack/3.

An insert is done when a cache miss happens. If you have a lot of cache misses, you don’t need tabling. There should be a ratio > 1 between cache hits and misses.

I don’t know yet how to measure this.

Edit 06.08.2019: ack/3 is a bad example, I just see.

/* hit/miss count for ack(3,12,X) */
?- count(X,Y).
X = hit, Y = 16393 ;
X = miss, Y = 81923

Maybe fib/2 is a better example:

/* hit/miss count for fib(31,X) */
?- count(X,Y).
X = hit, Y = 29
X = miss, Y = 32 ;

Also not that much better. I see that maybe it’s nevertheless important to have a fast miss-and-insert at the same time. Not sure; that’s only a constant factor.
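
To make the hit/miss counting concrete, here is a Java sketch of a memoized fib with counters (my own counting scheme: every call starts with a table lookup, and a miss triggers compute-and-insert). For fib(31) it reproduces the 29 hits and 32 misses shown above.

```java
import java.util.HashMap;
import java.util.Map;

public class FibTable {
    final Map<Integer, Long> table = new HashMap<>();
    long hits, misses;

    long fib(int n) {
        Long cached = table.get(n);     // every call starts with a table lookup
        if (cached != null) { hits++; return cached; }
        misses++;                       // miss: compute and insert the answer
        long v = n < 2 ? n : fib(n - 1) + fib(n - 2);
        table.put(n, v);
        return v;
    }

    public static void main(String[] args) {
        FibTable t = new FibTable();
        System.out.println(t.fib(31));                                 // 1346269
        System.out.println("hits=" + t.hits + " misses=" + t.misses);  // hits=29 misses=32
    }
}
```

Each of the 32 values fib(0)…fib(31) is computed exactly once (32 misses); the 30 computed values n ≥ 2 make two recursive calls each, so of the 61 lookups in total, 29 are hits.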

I have nothing to do but write posts here, since my Windows 10 machine is busy installing updates. The usual protocol, if you want a miss and an insert, for maps, tries, … whatever, is this here:

putIfAbsent
computeIfAbsent
https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#computeIfAbsent-K-java.util.function.Function-

You can get a deadlock when you use computeIfAbsent. I will try not to use computeIfAbsent in one of my prototypes. The idea is to allow a kind of benign race condition: two threads may compute the same value for a single, still-missing key.

The hope is that, on average, this does not happen very often. You find this kind of programming idiom all over the place in Java. You can also combine it with putIfAbsent, but this is not crucial, since you perform a get in advance.
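
The get-then-putIfAbsent idiom looks like this in Java (memoSquare is a made-up stand-in for an expensive computation): the value is computed outside the map, so two threads may duplicate the work, but putIfAbsent guarantees a single winner that all callers see.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RacyMemo {
    final ConcurrentMap<Integer, Long> table = new ConcurrentHashMap<>();

    // The idiom: cheap get first; on a miss, compute *outside* the map
    // (no lock held, so two threads may duplicate the work), then let
    // putIfAbsent pick a single winner so all callers see one value.
    long memoSquare(int n) {
        Long v = table.get(n);               // fast path, no computation
        if (v == null) {
            long computed = (long) n * n;    // possibly duplicated work
            Long prev = table.putIfAbsent(n, computed);
            v = prev != null ? prev : computed;
        }
        return v;
    }

    public static void main(String[] args) {
        RacyMemo m = new RacyMemo();
        System.out.println(m.memoSquare(7));  // 49 (computed on a miss)
        System.out.println(m.memoSquare(7));  // 49 (served from the table)
    }
}
```

Unlike computeIfAbsent, the mapping work here never runs while the map holds any internal lock, so this variant cannot deadlock on re-entrant computations.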

Edit 07.08.2019:
I just noticed that the “lock free” version has even some horrific limitations. I am not sure whether the newest version of this collection class still has the same limitation, but fib/2 wouldn’t work:

computeIfAbsent() never terminates (bug or feature)
https://stackoverflow.com/questions/28840047

Anyway, I do not intend to use this thingy, but it’s still interesting to see that there are non-trivial issues.