Linking from clause store to terms on the stack

Currently, Prolog makes a clear distinction between facts stored in the fact store, and terms created and referenced during goal processing, which are bound to predicate variables and live on the stack.

Facts in the fact base are not backtrackable (unless one designs an explicit assert wrapper that retracts a prior assert during backtracking) – hence an assert or retract is permanent, while terms referenced by variables during the processing of goals are backtrackable by default.
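
For reference, such a wrapper can be written in a couple of lines (b_assert/1 is a made-up name for this sketch): the second clause undoes the assert when execution backtracks into it.

% Assert Fact, retracting it again on backtracking.
b_assert(Fact) :-
    assertz(Fact).
b_assert(Fact) :-
    retract(Fact),
    fail.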

Could one envision a hybrid approach, where facts in the fact base have a pointer to “attributes” that live on the stack and are fully backtrackable?

The benefit of such a design would be the lookup time of such attributes, which can be reduced to a pointer-based lookup given a reference to a fact in the fact store.

Algorithms that temporarily associate metadata with facts for the purpose of processing would see significant lookup-time savings – not unlike attributed variables, where attributes associated with variables during processing have fast lookup times (since attributes are attached to variables via pointers).
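
As a minimal illustration of that fast path, using SWI-Prolog’s put_attr/3 and get_attr/3 (the module name meta is arbitrary): the attribute travels with the variable, lookup involves no search, and the association is undone on backtracking.

?- put_attr(X, meta, count(7)),
   get_attr(X, meta, A).
A = count(7),
put_attr(X, meta, count(7)).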

I am not sure that a reference data type would suffice to make this work.

You would have to actually extend the fact store with another column, as it were, that is managed internally, since from the user’s point of view it is always person(X) and not person(X, External_Ref_Blob).

Also, it’s a bit circular: External_Ref_Blob would need to refer to the clause person(X, External_Ref_Blob), which can’t be done until it is asserted. And if it refers to person(X), then you add lookups and lose the benefit of a single pointer hop.

It seems to me that what I have in mind requires extending core parts of Prolog, much like attributed variables required extending core parts.

Dan

Thanks.

It looks like I wasn’t able to explain what I am looking for.

Let me make up an artificial example, just to illustrate.

Consider the following code. Essentially, its aim is to associate a calculated value with each of item(a) and item(b). Due to backtracking, the calculated value for each of item(a) and item(b) can change.

In the code below an assoc is used to map item(a) and item(b) to their respective calculated values, simulated here by the long_calculation/2 predicate. During backtracking, the assoc backtracks as well and gets the next calculated value for each of item(a) and item(b) (one each, per backtrack request).

Now, the mapping is assoc-based – so any time one wants to retrieve the current calculated value of an item, an assoc lookup is needed, which can become expensive for a very large number of items – say, in the millions or tens of millions or more.

However, in C you wouldn’t use an assoc to map an item to a calculated value – you would use a pointer (or, in such a simple single-value case, simply an adjacent memory cell) to where item(a) is stored.

If you have a reference to a in the item table, then you can immediately retrieve the calculated value, say, via the pointer to its memory cell.

In Prolog, you have to take the atom ‘a’ as a key and look it up in the assoc to get the value. There is actually no other way to associate item(a) or item(b) with a backtrackable value but to use some kind of key-value data structure – and this is due to the dichotomy in Prolog between the fact store and the stack for terms bound to goal variables.

(One can implement it with backtrackable assert, but that is even more expensive, with all the indexing going on in the background. Or one can read all items into a term on the stack and use attributed variables (somehow) to associate each item with its calculated value via a pointer – which, however, replicates all items in stack memory, and that is not feasible.)

However, what if, in Prolog, along with item(a) stored in the fact base, one could also store a pointer to a (backtrackable) attribute Value (or Term), say via an & operator? Then a lookup item(a)&Attr would immediately bind the calculated value to Attr – and thereby save an assoc lookup, as in the code below.

For a very large list of items (and hence a large assoc) the saving can be substantial, during both lookup and put – in particular when the put and get of an attribute is performed in a core area of the system that gets called a lot.

Dan

:- use_module(library(assoc)).

item(a).
item(b).

% Simulates an expensive computation; yields an alternative result
% on backtracking.
long_calculation(X, Count) :-
    item_result(X, Count).

item_result(a, 5).
item_result(a, 6).
item_result(b, 2).
item_result(b, 3).

found_items(Items, Assoc) :-
    empty_assoc(Assoc0),
    aux_found_items(Items, Assoc0, Assoc).

aux_found_items([], Assoc, Assoc).
aux_found_items([First|Rest], Assoc0, Assoc) :-
    item(First),
    long_calculation(First, N),
    put_assoc(First, Assoc0, N, Assoc1),
    aux_found_items(Rest, Assoc1, Assoc).

% Placeholder for the downstream processing; here it just lists
% the key-value pairs.
process_found(Assoc, Result) :-
    assoc_to_list(Assoc, Result).

top_level_process(Items, Result) :-
    found_items(Items, Assoc),
    process_found(Assoc, Result).


?- top_level_process([a,b], Result).
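
With the placeholder process_found/2 above (which just lists the key-value pairs), backtracking enumerates the combinations of calculated values:

Result = [a-5, b-2] ;
Result = [a-5, b-3] ;
Result = [a-6, b-2] ;
Result = [a-6, b-3].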

An assoc or rb_tree lookup is O(log N), so the cost for 100K items is ~16; for 1M, ~20; for 10M, ~23. Are you sure that this will be the most expensive thing you do?

If you were to, for example, use a hash table in C, then the backtracking could also be moderately expensive (it’s pretty cheap for the assoc/rb_tree).

If I were you, I’d implement and measure rather than try to figure this out. I’d also create an abstraction layer that allows switching between assoc and rb_tree. I did that for a symbol table in https://github.com/kamahen/pykythe/blob/9433a4516959f7ece1af7c8d081ce854e3cbbc9e/pykythe/pykythe_symtab.pl#L69
My symtab grows to about 14K entries (log2: ~14) and it’s far from the most expensive thing in my code, according to profiling (and my code does a lot of both lookup and insertion). Your use case has a cost of roughly 2x mine, so I wonder if you’ll have a performance problem.
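
For illustration, such an abstraction layer can be as small as three wrappers; a hedged sketch (the map_* names here are made up, not the ones in pykythe_symtab.pl):

:- use_module(library(rbtrees)).

% Swap the bodies for empty_assoc/1, put_assoc/4 and get_assoc/3 to
% switch implementations without touching the calling code.
map_empty(Map)         :- rb_empty(Map).
map_put(Key, M0, V, M) :- rb_insert(M0, Key, V, M).
map_get(Key, Map, V)   :- rb_lookup(Key, V, Map).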

Thank you for the observation.

Profiling shows me that a significant amount of time is spent within the assoc put and get.

The question is whether this can be improved by other, more direct approaches, or whether I am looking down the wrong rabbit hole – and the focus needs to be elsewhere.

Edit:

I have a few areas I am looking into – one of them is better handling of such a mapping; another is faster lookup of “adjacent” nodes, which are currently fact-based lookups.

Dan

For me, rb_tree gave a definite performance boost. (I don’t remember the details, but I think rb_trees were faster at inserts than assoc and similar for lookups – your use case might be different, of course.)

Thanks. I will need to study this approach of putting vars in the value.

I guess this also works when the initial list of items is unknown and itself part of the algorithm.
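
For what it’s worth, a minimal sketch of the vars-in-the-value idea as I understand it, reusing long_calculation/2 from the example above: the assoc stores an unbound variable, which is bound later; the binding is trailed, so it backtracks without a second put_assoc.

demo :-
    empty_assoc(A0),
    put_assoc(a, A0, Val, A),   % store an unbound variable as the value
    long_calculation(a, Val),   % binding Val later is backtrackable
    get_assoc(a, A, V),         % V == Val; no re-insertion needed
    writeln(a-V).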

Peter,

I am a bit confused – what is really the impact of a logarithmic lookup at a cost of ~16 steps vs. a pointer lookup in performance-critical code?

I am, for example, told that in C++, for performance-critical code (e.g. real time), I should avoid the use of virtual function calls and stick to fixed addresses.

And here we are talking about (on average) 16 lookup steps, each a potential miss, before a value is obtained.

How does one square these two views?

Dan

There are lots of options. You can also represent a graph “the C way”. Each node simply becomes a compound that has fields for whatever properties you want to attach to the node, including a list (or assoc) of outgoing edges (and possibly incoming edges). The elements in these edge lists can be the actual target nodes rather than some node identifier. The resulting structure is highly cyclic, which means you need to be a bit careful traversing it to avoid running in circles. At the least, I’d associate a unique id with each node. You can extend this data structure with an rbtree that maps unique node ids to the node compound. You can update the graph using setarg/3. Such a structure is a bit of work to set up. I think its most useful property is that it is subject to backtracking. And yes, some operations are fast but many others are rather clumsy. It all depends on the critical actions you need to perform.
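
A minimal sketch of that setup, under assumed names (node/3, make_graph/3 and the helpers are invented for illustration):

:- use_module(library(rbtrees)).
:- use_module(library(apply)).

% Map is an rbtree from node id to its node(Id, Value, Edges) compound;
% the Edges list holds the target node compounds themselves, so the
% resulting structure is cyclic.
make_graph(Ids, Edges, Map) :-
    rb_empty(Map0),
    foldl(add_node, Ids, Map0, Map),
    maplist(link_edge(Map), Edges).

add_node(Id, Map0, Map) :-
    rb_insert_new(Map0, Id, node(Id, _Value, []), Map).

link_edge(Map, From-To) :-
    rb_lookup(From, FromNode, Map),
    rb_lookup(To, ToNode, Map),
    arg(3, FromNode, Old),
    setarg(3, FromNode, [ToNode|Old]).   % backtrackable update

% Attach a (backtrackable) property: one O(log N) lookup by id,
% after which edges are followed by pointer only.
set_value(Map, Id, Value) :-
    rb_lookup(Id, Node, Map),
    setarg(2, Node, Value).

After make_graph([a,b,c], [a-b, b-c, c-a], G), following an edge is just picking a node compound out of another node’s edge list – no further id lookups.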

Hi Jan,

Thank you.

And my apologies for laboring this same point again and again …

But, I guess, this all presupposes that the graph is represented as a term bound to a variable rather than stored in the fact base.

Do I understand this correctly?

I am looking for a performant way to keep the graph in the fact base, while still allowing very fast access to (retractable) properties and (static) adjacents …

Dan

Sorry for the confusion …

I think there are two separate, but connected, issues here.

  1. how to implement graph properties that are backtrackable in a performant way.
  2. how to implement a graph so that adjacency lookup is highly performant – although the graph structure itself doesn’t need to be backtrackable; graph creation code can be placed within transactions.

In the design of a graph structure in C, one would take care of both requirements in the same graph data structure.

Dan

This is indeed what I am wondering about: suppose there is an N that is very large and the processing is cyclic – i.e. the lookup happens many times, all the time, during processing.

Although c2 - c1 might be quite small, N*(c2 - c1) adds up quickly – it’s what I like to call the “background noise” that creates a compute drag …

Just as an analogy, think about a real-time system that scans an environment every second or fraction of a second – such a hard upper bound might play into this.

Dan

Great point :slight_smile:

It’s true – since my work is research-oriented, I may learn, eventually, that I have a lower bound I can’t get below unless I rethink the whole thing.

So, I will eventually need to measure N*c1 – to learn what c1 is, and to see whether it is low enough in absolute terms.

I wonder if the network (database) model could be integrated into Prolog.

Since Prolog is inherently relational, it would probably be an implementation directive whereby “access paths” across predicates, threaded from foreign to primary keys, are optimized …

So, to fantasize about this a bit:

Consider an adjacency graph representation, along with a directive:

:- hierarchical(adjacent, cyclic).

adjacent(a, [b, c, d]).
adjacent(b, [e, f, g]).
adjacent(f, [h, a]).

This might somehow optimize access when arg 1 is ground and “cross-predicate” lookups are made with an element of arg 2 appearing in the next head lookup.

e.g.
Given X = a, adjacent(X, [H|R]) is retrieved conventionally; however, when a later lookup is made with H in position 1, like so:

adjacent(H, [H2 | R2]),

then this could internally translate into a pointer lookup …

Edit:
Somehow, H would keep track of its originating predicate and, instead of initiating an indexed lookup, would “reach back” to the “primary” key and its clause, and identify the pointer that links a to the adjacent(b, [e, f, g]) clause.

Edit:

Bidirectional access could then also be supported, if requested – at the cost of more storage space:

:- hierarchical(adjacent, cyclic, bidirectional).

Dan

I still don’t understand why your “adjacents” are not normalized, esp. if they are static (as you write higher up in the thread). Writing convoluted code in Prolog shouldn’t be a goal in itself.

If you write the most obvious code, and the algorithms have an acceptable complexity for your use cases, and you still don’t get the performance you need, you can at least profile it under the workloads you have (you’ve done so already) and try to re-implement the most sensitive parts in a language like C. SWI-Prolog’s implementation itself does that, for example for the extra-logical sorting, or something like is_list/1.

It’s for demonstration purposes, but also to show a hierarchical structure.

Also, mathematically speaking, adjacency lists are lists – but yes, you are (again) right, this is not a good representation.

Sure, but also mathematically speaking, there is no difference between:

adj(a, [b, c, d]).

and

adj(a, b).
adj(a, c).
adj(a, d).

It is exactly the same thing. It is also a hierarchical structure unless I am missing something.
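
And the list form is trivially recovered from the normalized facts when needed, while the normalized facts get first-argument indexing for free:

?- findall(Y, adj(a, Y), Neighbours).
Neighbours = [b, c, d].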


Another way to represent a graph is to use the adjacency matrix. You can represent a boolean matrix in Prolog using either a (flat) compound term or even an integer. Have you measured what the lookup time would be? Especially if the graph itself doesn’t change, it might not even be a terrible idea.
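
A sketch of the integer variant (add_edge/5 and has_edge/4 are made-up names): one integer encodes the whole boolean matrix, with edge I->J of an N-node graph stored at bit I*N+J, so the lookup is a single bit test.

% Set the bit for edge I->J in matrix M0, giving M.
add_edge(N, M0, I, J, M) :-
    M is M0 \/ (1 << (I*N + J)).

% Succeeds iff edge I->J is present.
has_edge(N, M, I, J) :-
    M /\ (1 << (I*N + J)) =\= 0.

For example, add_edge(3, 0, 0, 1, M) gives M = 2, and has_edge(3, 2, 0, 1) succeeds.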

As usual you are right.

I should look up what level of normalization hierarchical databases used to have … I wanted to illustrate that.

Dan

First of all, it’s not necessarily a 16x cost – both O(1) and O(log N) hide constant factors; e.g., the real costs might be 50 * 1 for the O(1) structure vs. 3 * 16 for O(log 100K). (The difference probably isn’t as great as that, of course.)

Secondly, there can be knock-on effects from a choice of data structure. For example, many databases offer both hash and btree indexing, and the usual choice is btree even though it’s O(log N) – the reason is that after searching, the results often need to be merged or sorted and for that, btree results are already sorted whereas hash lookup requires an additional O(N log N) (with a smaller N); also, there can be a big constant factor difference, especially for data that’s stored on disks, due to head movement for random seek. On top of that, inserts/deletes in hash tables can have a much larger amortized constant factor.

If, as you say, the lookup is the most performance-critical thing, then it may be the case that some kind of O(1) lookup will help – but it might not help as much as you hope because it might slow down other parts of your code (in your case, if I understand you correctly, the backtracking/removal of items).

Also, unless performance is really horrible, I would focus on the most clear implementation and get it working; optimization can usually be done afterwards. (This is not always the case – sometimes the choice of representation requires a large rewrite; and I have seen programs where an astute choice of representation has made an enormous difference.)