Just clone the git repo and check out src/pl-wam.c (which should be renamed). The interpreter is PL_next_solution(). Most of the body is defined in src/pl-vmi.c, a file that is "#include"-ed into src/pl-wam.c. PL_next_solution() is hard to read, notably because it plays a lot of cpp games to compile the actual implementation of the instructions in the three described ways.
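To make the cpp games a bit more concrete, here is a minimal sketch of the general trick: the instruction bodies are written once and expanded under different macro definitions, so the same bodies can be compiled e.g. as switch cases or as labels for a computed goto. All names below are invented for the example; they are not the macros actually used in src/pl-vmi.c.

```c
#include <stdio.h>

/* The instruction bodies, written once.  In the real system this is a
 * separate file that is #include-ed; a macro keeps the sketch short. */
enum vmi { I_PUSH1, I_ADD, I_PRINT, I_EXIT };

#define INSTRUCTIONS \
  VMI(I_PUSH1)  stack[sp++] = 1;                 END_VMI \
  VMI(I_ADD)    sp--; stack[sp-1] += stack[sp];  END_VMI \
  VMI(I_PRINT)  printf("%d\n", stack[sp-1]);     END_VMI \
  VMI(I_EXIT)   return 0;                        END_VMI

int
run(const enum vmi *pc)
{ int stack[8], sp = 0;

  for(;;)
  { switch(*pc++)
    { /* Compile the bodies as switch cases; other definitions of VMI()
       * could produce labels for computed goto or separate functions. */
#define VMI(name) case name: {
#define END_VMI   } break;
      INSTRUCTIONS
#undef VMI
#undef END_VMI
    }
  }
}

int
main(void)
{ static const enum vmi prog[] = { I_PUSH1, I_PUSH1, I_ADD, I_PRINT, I_EXIT };

  return run(prog);
}
```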
I'm also a little puzzled by the large number of local variables. I suspect they come from the helper variables used by the many inlined functions. Without optimization these are apparently all allocated independently, so the system fails to compile. Using -O1 it does compile, but apparently there are still quite a few left.
The recursion issue (typically not deep, but used by callbacks such as with_output_to/2, sig_atomic/1, etc.) appears to occur because setjmp()/longjmp() in Emscripten is implemented by wrapping WASM functions in JavaScript functions and using JavaScript exceptions. This seems to copy (part of?) the local variables of the involved WASM functions to JavaScript. I must admit I don't understand in detail what is going on. The alternative is to use the (still very new) WASM exception handling. That indeed solves running out of JavaScript stack, but reduces performance by about 30%, which seems to be caused by the boilerplate needed to make exception handling possible.
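For anyone who wants to experiment, the pattern that bites us boils down to something like the snippet below: a setjmp() frame, (callback-style) recursion on top of it and a longjmp() back out. My understanding, and this is an assumption, please correct me, is that Emscripten's default compiles this to the JavaScript-based emulation, while building with -fwasm-exceptions (and -sSUPPORT_LONGJMP=wasm) keeps the unwinding inside WASM at the cost of the exception-handling boilerplate.

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
recurse(int depth)
{ if ( depth == 0 )
    longjmp(env, 1);            /* unwind all frames back to setjmp() */

  recurse(depth-1);
}

int
main(void)
{ if ( setjmp(env) == 0 )
  { recurse(1000);              /* think with_output_to/2 style callbacks */
    printf("not reached\n");
  } else
  { printf("returned via longjmp()\n");
  }

  return 0;
}
```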
Well, I see two ways out. One would be to define a new bignum API to be used in Prolog itself that exploits LibBF's capabilities to gracefully handle allocation problems. This API would then have to be implemented on top of both LibGMP and LibBF. That won't solve the issue for GMP, but the WASM version uses LibBF anyway for reduced size and a simple uniform license. Mostly, it is a lot of work. I guess the simplest way is to define libGMP wrappers that are not void functions, but return a failure code. Next these wrappers need to be implemented for both back ends, and all Prolog bignum code needs to check the return values and act appropriately.
To make LibGMP safe under this scheme, all functions would need a setjmp()/longjmp() wrapper inside. That might get rather slow.
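Roughly, I'm thinking of something like the sketch below: the GMP allocation hooks longjmp() to a jmp_buf that each wrapper sets up, and the wrapper turns an allocation failure into a failure code. All names are invented, the jmp_buf would have to be thread-local, and leaking partially built numbers when we jump out of LibGMP is not addressed here.

```c
#include <gmp.h>
#include <setjmp.h>
#include <stdlib.h>

jmp_buf gmp_env;                        /* would be thread-local in real code */

static void *
safe_alloc(size_t size)
{ void *p = malloc(size);

  if ( !p )
    longjmp(gmp_env, 1);                /* escape from inside LibGMP */
  return p;
}

static void *
safe_realloc(void *old, size_t oldsize, size_t newsize)
{ void *p = realloc(old, newsize);

  (void)oldsize;
  if ( !p )
    longjmp(gmp_env, 1);
  return p;
}

static void
safe_free(void *p, size_t size)
{ (void)size;
  free(p);
}

void
init_gmp_hooks(void)
{ mp_set_memory_functions(safe_alloc, safe_realloc, safe_free);
}

/* Non-void wrapper: 1 on success, 0 if an allocation failed. */
int
pl_mpz_mul(mpz_t rop, const mpz_t op1, const mpz_t op2)
{ if ( setjmp(gmp_env) != 0 )
    return 0;                           /* report resource error to Prolog */

  mpz_mul(rop, op1, op2);
  return 1;
}
```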
The other way out is to have setjmp()/longjmp() around arithmetic evaluation rather than globally. That requires some serious redesign of the VM for handling optimized arithmetic. Most likely it will also introduce a noticeable slowdown.
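The coarser-grained variant would place one setjmp() around a whole evaluation rather than around every GMP call, something like the sketch below (reusing the hooks and jmp_buf from the previous sketch). eval_demo() is an invented stand-in for the real evaluation code, not the actual VM.

```c
#include <gmp.h>
#include <setjmp.h>

extern jmp_buf gmp_env;                 /* the jmp_buf used by the hooks above */

/* 1 on success, 0 on a resource error anywhere inside the evaluation */
int
eval_demo(mpz_t result, const mpz_t a, const mpz_t b)
{ if ( setjmp(gmp_env) != 0 )
    return 0;

  mpz_mul(result, a, b);                /* may allocate */
  mpz_add(result, result, a);           /* may allocate */
  return 1;
}
```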
Yet another thought might be to wrap PL_next_solution() in a helper that does the setjmp()/longjmp(). ChatGPT says this will only make things worse, but I have my doubts that this is right. Anyone?
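Structurally I mean something like the helper below, so the setjmp() frame lives in a tiny function rather than inside the huge VM function. next_solution_guarded() and vm_env are invented names; the VM would of course have to longjmp() to this jmp_buf, and whether this actually avoids the JavaScript stack problem is exactly what I'm unsure about.

```c
#include <SWI-Prolog.h>
#include <setjmp.h>

static jmp_buf vm_env;                  /* would be thread-local in real code */

static int
next_solution_guarded(qid_t qid)
{ if ( setjmp(vm_env) != 0 )
    return FALSE;                       /* aborted via longjmp() from the VM */

  return PL_next_solution(qid);
}
```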
Also possible would be to introduce C++ and use C++ exception handling. That looks like a lot of work with uncertain benefits though, and I doubt we can make LibGMP safe this way.
Also possible is to estimate the size of the LibGMP allocations and prevent overflow that way. That is in part already done, but for some functions the estimate is probably non-trivial.
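For the easy cases the estimate is simple. For example, a product needs at most bits(a)+bits(b) bits, so we can raise a resource error before asking LibGMP to allocate, as in the sketch below. The limit is invented for the example; for other functions the bound is less obvious.

```c
#include <gmp.h>

#define MAX_BIGNUM_BITS (64*1024*1024)  /* invented limit for this sketch */

/* 1 if the product of a and b stays within the limit, 0 otherwise */
int
mul_size_ok(const mpz_t a, const mpz_t b)
{ size_t bits = mpz_sizeinbase(a, 2) + mpz_sizeinbase(b, 2);

  return bits <= MAX_BIGNUM_BITS;
}
```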