Goals in freeze are not tail call optimized

from_to(N,N) :- !.

from_to(N,M) :-
    freeze(Next, from_to(Next,M)),
    Next is N + 1.

:- from_to(1, 10000000).

As the title says. I don’t know enough to say this is a bug, oversight, impossibility, etc. Any insights are appreciated.

I am building a terminal user interface (TUI) library where the top-level event loop is driven by something like the above. Everything works as hoped, but I end up blowing the stack eventually.

At a glance: why aren't you freezing N and M, since they are the predicate arguments?

Theoretically I guess it should be possible to make this run in finite space. As it is, though, YAP and SICStus also keep the stack growing. Only, they have no (soft) limit on the stack size, so they finish on my machine, using 3Gb for YAP and 1.5Gb for SICStus.

I fear you need to dig into the Prolog internals and improve them, or rethink your design …


I have a fix for my needs.
In my specific case I’ve just added:

% Succeed once V is bound; until then, poll, yielding between checks.
wait_for(V) :-
    nonvar(V), !.

wait_for(V) :-
    sleep(0),
    wait_for(V).

:- meta_predicate(trampoline_loop(2, ?)).
% Call T to produce the next state, wait until it is bound, then loop;
% the recursive call is a tail call.
trampoline_loop(T, In) :-
    call(T, In, Next),
    wait_for(Next),
    trampoline_loop(T, Next).
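To make the pattern concrete, here is a minimal sketch (count_step/2 is a made-up step predicate, not part of the library) that drives the from_to/2 counter from the top of the thread through this trampoline:

count_step(N, Next) :-        % made-up example step predicate
    N < 10000000,
    Next is N + 1.

% ?- trampoline_loop(count_step, 1).
% fails once the bound is reached, without the stack growing.

Because count_step/2 hands back Next already bound, wait_for/1 succeeds immediately and the recursion lives entirely in the tail call of trampoline_loop/2, so the loop runs in constant stack space.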

In more detail:

The architecture of my user interface library revolves around the idea of an M0 ~> M1 term that describes the model now (M0) and a variable M1 that will be updated when the model changes.

I can also declare how sub-model changes affect model changes. For example:
assocs(M0 ~> M1) <-.. scroll_offset1-(SO10 ~> SO11)

This succeeds when M0 is an association list containing the key-value pair scroll_offset1-SO10.
If/when a new value is set for scroll_offset1 by unifying SO11 with a non-variable, M1 is bound to the updated association list. This all works via freeze/2.
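Not the actual <-.. implementation, but a self-contained sketch of the same idea over library(assoc); assoc_update/2 and the ~> operator priority below are just for illustration:

:- use_module(library(assoc)).
:- op(200, xfx, ~>).          % illustrative priority only

% assoc_update(M0 ~> M1, Key-(V0 ~> V1)):
%   V0 is the current value of Key in M0; once V1 is bound,
%   M1 is bound to M0 with Key updated to V1.
assoc_update(M0 ~> M1, Key-(V0 ~> V1)) :-
    get_assoc(Key, M0, V0),
    freeze(V1, put_assoc(Key, M0, V1, M1)).

% ?- list_to_assoc([scroll_offset1-0], M0),
%    assoc_update(M0 ~> M1, scroll_offset1-(SO0 ~> SO1)),
%    SO1 = 3,                 % a UI event supplies the new value
%    get_assoc(scroll_offset1, M1, V).
% SO0 = 0, V = 3.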

These Before ~> After terms are renderable, and the view predicates are responsible for updating the data they render, hence propagating model changes upward.

At the top level, a model change recursively called the top-level view whenever the model was updated (again, via freeze/2):
app(M0) :- on(~>M1, app(M1)), ... .

This blew out the stack.

Now I have:
app(M0, M1) :- ... and
trampoline_loop(app, M)

Functionally I have everything I want working as expected.


Sounds like a nifty design :) I do not get the wait_for/1, though. What would cause the variable to get bound?

It’s a “trickle up” cascade.
V0 ~> V1 represents that V1 might get bound by a direct user interface event, or via propagation.

(Term1_1 ~> Term1_2) <-.. Term2 is a syntax that expects sub-terms of Term2 to be of the form Term2_1 ~> Term2_2, where binding Term2_2 triggers an appropriate binding of Term1_2.

So I have implemented (as examples):
assoc(A1 ~> A2) <-.. Key-(V1 ~> V2) :- ...
To trigger a binding of A2 to a new association list based on a Key-Value update event.
Or:
list(L0 ~> L1) <-.. Triggers to trigger a binding of L1 when a member of L0 changes.

Code runs inside with_tty_raw/1, where user input is directed to the component the cursor sits on; that component may update the data it is responsible for, and the change trickles up through the model.
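For a rough idea of the input side (hypothetical names, not the library's code): a key press read in raw mode binds the sub-model's After variable, and that binding wakes whatever freeze/2 goals are watching it:

% Read one key press in raw mode and bind V1, the "after" variable of the
% focused sub-model; binding it fires the frozen propagation goals.
read_key_event(V0, V1) :-
    with_tty_raw(get_char(user_input, Char)),
    (   Char == '+' -> V1 is V0 + 1     % e.g. scroll down
    ;   Char == '-' -> V1 is V0 - 1     % e.g. scroll up
    ;   V1 = V0                         % any other key: unchanged value
    ).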


Will you open-source your TUI library? It seems very interesting.

Yes. I've solved most of the performance and ergonomics problems. I want to experiment more and finalize the core of the API.


Please do!
How about making a TUI text editor to show off your library?
I can't promise anything, but if you open-source your library, I might try to do it :)

In the past, I tried to write a text editor in Prolog with a simple ping-pong loop, but that is obviously bad for asynchronous things… (one amongst a lot of other bad decisions ^^)
