Now I have an idea of how to asyncify the ISO core standard I/O.
The idea is very simple and tries to solve one problem: how
can we make, for example, get_code/2 asyncified without
sacrificing performance? I came up with this sketch:
get_code(S, C) :- get_code_buffered(S, C), !.
get_code(S, C) :- async_read_buffer(S), get_code(S, C).
There are some unresolved issues in the above code, like
how to return end of file, i.e. -1, or how to deal with surrogate pairs.
But essentially the idea is that the stream S has a buffer
somewhere and that the fast path is to read from this buffer,
and we only need to yield when we replenish the buffer via
some I/O. This could give a quite fast asyncified get_code/2.
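
To make the sketch a bit more concrete, here is one possible shape the two
helpers could take, assuming a purely hypothetical buffer representation
where the stream keeps a list of pending codes; buffer_codes/2,
set_buffer_codes/2 and await_chunk/2 are made-up names for illustration,
not predicates of any existing system:

get_code_buffered(S, C) :-
   buffer_codes(S, [C|Cs]),      % fast path: the buffer still holds codes
   set_buffer_codes(S, Cs).
% get_code_buffered/2 simply fails when the buffer is empty.

async_read_buffer(S) :-
   await_chunk(S, Cs),           % yields to the host until more data arrives
   set_buffer_codes(S, Cs).
% End of file handling is deliberately left out here, as discussed above.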
Is there any Prolog system that has already done something like this?
I always see that get_code/2 is a primitive, and so far I was also
following this approach in my systems. In the above I bootstrap it from
two other primitives, a synchronous one and an asynchronous one,
so that it is itself no longer a primitive. get_code_buffered/2 would
fail if it has reached the end of the buffer.
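
One purely speculative way to address the end-of-file question from above
would be to let the refill step flag the stream once the underlying source
is exhausted, and add a clause that reports -1; at_end_of_buffer/1 and the
clause ordering are only assumptions for the sake of illustration:

% assumes async_read_buffer/1 still succeeds at end of file, after it
% has flagged the stream, so that the recursion reaches the second clause
get_code(S, C) :- get_code_buffered(S, C), !.
get_code(S, -1) :- at_end_of_buffer(S), !.    % source exhausted, report -1
get_code(S, C) :- async_read_buffer(S), get_code(S, C).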
Edit 10.08.2022:
But there is a lot of work to do. peek_code/2 needs a similar
treatment (a sketch follows below), and in my system I also have
get_atom/3, which would need some treatment as well. Scryer Prolog
would go for the XXX_char predicates first; if they are present in a
Prolog system, they would also need some treatment. Also, the Python
and the JavaScript targets might turn out differently. In JavaScript there are
different fetch modes, chunked and non-chunked. In non-chunked mode
we might get the whole file in one go, so a second call to async_read_buffer/1
would never happen. Chunked mode is related to what SWI-Prolog calls HTTP
streaming. I don’t know the details yet. And then maybe it should
transparently choose something synchronous if asynchronous
behaviour is not demanded from the Prolog interpreter at all.
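
For completeness, the same two-clause pattern carried over to peek_code/2
might look like this; peek_code_buffered/2 is again only an assumed
primitive that inspects the buffer without consuming from it:

peek_code(S, C) :- peek_code_buffered(S, C), !.
peek_code(S, C) :- async_read_buffer(S), peek_code(S, C).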
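
And the transparent fallback mentioned at the end could perhaps be a small
dispatch in front of the refill step; async_mode/0 and sync_read_buffer/1
are made-up names for illustration only:

read_buffer(S) :- async_mode, !, async_read_buffer(S).   % yield to the host
read_buffer(S) :- sync_read_buffer(S).                   % plain blocking read

get_code/2 and friends would then call read_buffer/1 instead of calling
async_read_buffer/1 directly.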