Hello,
From what I read about how engines work – they are attached to a thread and cooperatively yield once their work is done – it seems that one could not program a preemptive schedule of engines.
Is this correct?
thanks,
Daniel
thank you —
I was thinking about a virtualization of engines, so that many more engines could be scheduled than there are threads … I guess this is not possible.
Thanks.
Yes, I misunderstood – I thought that BEAM is preemptive, but it's not.
Instead, apparently, whenever a function is called within a process's code, a “reduction” is counted, and when a limit is reached the process is suspended by a yield.
In Erlang it's apparently a built-in mechanism that the compiler provides for scheduling – so reduction counting and handling happen at the VM instruction level.
In essence the analogue would be that every call in an engine would need to be wrapped in order to count a reduction and then yield when the engine's limit is reached.
Looks like this can be done, but it sounds very expensive – since every predicate call in an engine needs to be wrapped.
SWI-Prolog would need to be extended to support a BEAM-like cooperative scheduling of engines by adding VM instruction wrappers to calls, reduction handling and, I guess, hooks for a scheduler.
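To make the wrapping idea concrete, here is a hypothetical sketch at the library level (not a VM extension). The names wrap/1 and reset_budget/1 and the refill size of 1000 are all inventions for illustration, not an existing API; the wrapped goal must run inside an engine so that engine_yield/1 has something to yield to.

```prolog
% Hypothetical sketch of reduction-counted calls inside an engine.
% wrap/1 spends one "reduction" per call and yields when the budget
% is exhausted.  Global variables are per-engine in SWI-Prolog, so
% each engine keeps its own budget.

reset_budget(N) :-                      % assumed helper, not a library predicate
    nb_setval(reductions_left, N).

wrap(Goal) :-
    nb_getval(reductions_left, N0),
    (   N0 =< 0
    ->  engine_yield(suspended),        % hand control back to whoever runs the engine
        reset_budget(1000),             % assumed refill size on resume
        call(Goal)
    ;   N1 is N0 - 1,
        nb_setval(reductions_left, N1),
        call(Goal)
    ).
```

A scheduler would then call engine_next/2 on each engine in turn: an answer of suspended means the budget ran out, anything else is a real answer.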
Edit:
Btw, there are no loop constructs in Erlang – it's all (tail) recursion, hence function calls – so the reduction-based yield algorithm covers such loops as well.
Dan
Perhaps hooking into the inference counter is a good idea …
In Erlang the reason they use reductions and not preemption is to ensure completion of certain tasks, such as tagging of integers, which cannot be interrupted.
However, I guess in Prolog an engine could simply run for x inference steps and then yield.
How does one hook into the inference count – I noticed call_with_inference_limit/3, but that's not quite what is needed here.
Edit:
How is inference count actually defined - what exactly is counted?
That’s quite a project – since I have no idea yet how to program hooks going from C back to Prolog, nor how hooks work in Prolog, nor where inferences are counted, etc.
As a first stab at Erlang-style processes I would probably try to use a wrapper that inlines a reduction-and-yield check – or simply put a yield check manually into each predicate called from within an engine.
A good start is:
git grep inferences
The docs and implementation of call_with_inference_limit/3 also help.
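For observing the counter itself: statistics/2 with the inferences key reads the running per-thread count (incremented roughly once per predicate call). A minimal sketch; count_inferences/2 is an illustrative name, not a library predicate:

```prolog
% Measure how many inferences a goal uses by reading the per-thread
% counter before and after running it.
count_inferences(Goal, Used) :-
    statistics(inferences, Before),
    call(Goal),
    statistics(inferences, After),
    Used is After - Before.
```

Running something like count_inferences(forall(between(1,100,_), true), U) shows the count growing with the number of calls and redos, which gives a feel for what one "reduction" would correspond to.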
Yes, you could combine this with engine_yield/1 to achieve a new type of yield. I don’t know how useful that is. Maybe you can do interesting stuff.

As explained before though, SWI-Prolog thread/engine handling doesn’t really scale into the millions. The main reason is in memory management, both atom/clause GC and discarding old objects such as outdated clause indexes, hash tables, etc. All these do linear scans through the existing engines and threads (they are internally the same except for a few flags). Thousands of these work fine. Depending on hardware and Prolog workload things go wrong (as in gets slow) at some point. Applications that do not create (many) atoms and do not retract (many) clauses may scale quite far.
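A minimal sketch of such a combination, with the big caveat that call_with_inference_limit/3 aborts the goal when the limit is exceeded, so this version restarts it from scratch after yielding (a real scheduler would need resumable state). The name run_sliced/2 and the doubling back-off are inventions for illustration:

```prolog
% Run Goal in slices of at most Limit inferences, yielding between
% slices.  Must run inside an engine.  On a blown budget the goal is
% restarted with a doubled limit so it eventually completes.
run_sliced(Goal, Limit) :-
    call_with_inference_limit(Goal, Limit, Result),
    (   Result == inference_limit_exceeded
    ->  engine_yield(out_of_budget(Limit)),   % scheduler sees this answer
        NewLimit is Limit * 2,                 % back off, then restart Goal
        run_sliced(Goal, NewLimit)
    ;   true                                   % Result is true or ! : done
    ).
```

The caller drives it with engine_next/2: answers of the form out_of_budget(_) mean "still working", and the engine's real answer arrives once a slice is large enough for the goal to finish.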
Thank you.
My intent is to use this only for analysis of (graph) structures rather than creation / modification of the graph in the KB. So, perhaps, for this more constrained use case, it may have good merit.
Although I do need to aggregate results – I need to see how this could be done in such an isolated and concurrent environment – perhaps by passing a dictionary in and out, or something similar.
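One way the aggregation might look, sketched with plain terms rather than a dict: each engine computes a partial result as its answer, and a coordinator collects them with engine_next/2. All the predicate names here, including the toy analyse_part/2 (which just counts elements as a stand-in for real graph analysis), are assumptions:

```prolog
:- use_module(library(apply)).          % foldl/4

% Create one engine per part, then fold their answers into a total.
aggregate_engines(Parts, Total) :-
    findall(E,
            ( member(P, Parts),
              engine_create(R, analyse_part(P, R), E)
            ),
            Engines),
    foldl(collect, Engines, 0, Total).

collect(E, Acc0, Acc) :-
    (   engine_next(E, Count)           % run the engine to its answer
    ->  Acc is Acc0 + Count
    ;   Acc = Acc0                      % engine failed: contribute nothing
    ),
    engine_destroy(E).

% Toy stand-in for a graph analysis: count the elements of a part.
analyse_part(Part, Count) :-
    length(Part, Count).
```

Since engines are isolated, nothing is shared while they run; all communication happens through the answer term, which is where a dict could be passed out just as easily as a number.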