I’m reading data from a csv file and asserting it with a dynamic predicate.
It’d be good to table this - it’s a lookup table, after all - but it’s dynamic.
Will this work? What’s best practice here?
For me this is a question of the form, which data structure would be better?
Better in this case depends on the needs. Will writes outnumber reads? Will there be a lot of data or only a little, which determines the desired Big O for finding an entry? Does the data need to persist across invocations (if the system is halted, will the data be automatically reloaded)? Can the Prolog representation be considered the authoritative source of the information, or would one have to go back to the CSV files when there is a question? Are there multiple threads? You get the idea of what to consider.
Also, the answer may not even be a Prolog data structure; the data might be better stored in a server and accessed through an interface, e.g. an SQL database, a Redis server, etc.
I just read in a csv file and maplist over rows asserting the contents into facts for later use.
Yes, we reload the CSV file on startup; this program is what loads it into Prolog. Honestly, it's a 200-line file, so load time isn't an issue. What is an issue is access time for the asserted facts.
Is tabling even a speedup here? Am I not manually 'tabling' the data?
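For reference, the load-and-assert pattern described above can be sketched like this in SWI-Prolog (the predicate name `lookup` and its arity are placeholders, not from the original post):

```prolog
:- use_module(library(csv)).

:- dynamic lookup/3.   % arity should match the number of CSV columns

% Read every row of File as a lookup/3 fact and assert it.
load_lookup(File) :-
    retractall(lookup(_, _, _)),               % drop any previous load
    csv_read_file(File, Rows, [functor(lookup)]),
    maplist(assertz, Rows).
```

The `functor(lookup)` option makes `csv_read_file/3` produce terms like `lookup(a, b, c)` directly, so the rows can be handed straight to `assertz/1`.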
If it is just facts (and I assume so, given the CSV input), no. Well, there is a very small maybe in the fact that tabling uses a trie for indexing and clause indexing uses hash tables. There are some (probably obscure) cases where the table will win.
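To illustrate the distinction: tabling pays off when a predicate is *computed* (especially recursively), not when it is a set of ground facts. A hypothetical example where the directive genuinely helps:

```prolog
% Tabling memoizes answers and terminates left-recursive queries.
:- table path/2.

edge(a, b).
edge(b, c).

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
```

For plain facts loaded from a CSV, there is no computation to memoize, so the trie-vs-hash indexing difference mentioned above is all that remains.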
Adding the dynamic predicates p1, p2, p3, … in the form, e.g.:
assertz(p1).
assertz(p2(1,2,3)).
assertz(p2(11,12,13)).
assertz(p3(a,b,c)).
and afterwards applying logic to them should work.
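Once asserted, queries against these facts benefit from SWI-Prolog's just-in-time clause indexing on the first argument, e.g.:

```prolog
?- p2(11, Y, Z).
Y = 12,
Z = 13.
```

With many clauses, such a lookup goes through a hash index rather than a linear scan, which is usually what matters for the access-time concern raised above.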