I’m using SWI-Prolog (threaded, 64 bits, version 9.3.19-13-gc4848cb0b-DIRTY), and also version 8.4.1.
When I use a variable inside a lambda in a single branch only, and I reload the code and call the predicate again, the variable is free afterwards. But it should be bound.
p001( X, Y) :- true
    , A = 100
    , call( [X,Y]>>(
          X == 1 -> Y = [X] ;
          Y = A
      ), X, Y )
    .
The log:
(ins)$ swipl -s cnf_pretty_printer.pl
reading ~/.swiplrc (FS)
Welcome to SWI-Prolog (threaded, 64 bits, version 9.3.19-13-gc4848cb0b-DIRTY)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.
For online help and background, visit https://www.swi-prolog.org
For built-in help, use ?- help(Topic). or ?- apropos(Word).
(ins)?- p001( 2, X).
X = 100.
% so far so good
% now I rewrite the same code (or touch the file)
(ins)?- make.
Warning: /home/ox/tmp/2025_01_31_swipl_crash/cnf_pretty_printer.pl:2:
Warning: Singleton variable in branch: A
% /home/ox/tmp/2025_01_31_swipl_crash/cnf_pretty_printer compiled 0.00 sec, 1 clauses
true.
(ins)?- p001( 2, X).
true.
% X should be set but is free
A workaround is to set the variable in the branch.
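For illustration, a minimal sketch of that workaround as I read it: bind the value inside the lambda’s branch, so the lambda no longer depends on a variable shared with the enclosing clause:

p001(X, Y) :-
    call([X0,Y0]>>(
             X0 == 1 -> Y0 = [X0]
             ;  A = 100, Y0 = A     % A is bound inside the branch itself
         ), X, Y).

This behaves the same whether the lambda is interpreted or compiled, because A is now local to the lambda.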
And I would like to know how I can avoid the warning message when I use a variable in a single branch only.
It is not bad to have a warning there, but under some circumstances I want a local exception from this rule.
I want to read in some config variables in one step via nb_getval( name, [A, B, …]) at the beginning of a predicate and later use them wherever I want, even in a branch.
Now I understand. I didn’t expect that the branch warning goes away with this contextual variable declaration, because I had related the “branch” only to the alternative branches of the soft cut. It does indeed solve the problem.
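For the record, a minimal sketch of how I read that fix, using library(yall)’s {...}/ free-variable prefix; q001/2 and the nb_getval/2 key are made-up names for the example:

:- use_module(library(yall)).

p001(X, Y) :-
    A = 100,
    % {A} declares A as shared with the enclosing clause, so the
    % compiled lambda keeps the binding A = 100 as well.
    call({A}/[X0,Y0]>>( X0 == 1 -> Y0 = [X0] ; Y0 = A ), X, Y).

% Same pattern with config values fetched once via nb_getval/2
% (assumes a matching nb_setval/2 was done elsewhere):
q001(X, Y) :-
    nb_getval(config, [A, B]),
    call({A,B}/[X0,Y0]>>( X0 == 1 -> Y0 = [A] ; Y0 = B ), X, Y).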
Additionally I observed that it is possible that the warning message does not appear while the wrong behaviour still happens. I think this bug is severe because nobody may notice it - you just get wrong results.
library(yall) is broken in the sense that the compiled behaviour differs from the interpreted behaviour for variables shared with the rest of the clause. If you load the code without explicitly loading the library, it is used interpreted. If you then use it and reload, it is compiled. So, always make sure library(yall) is loaded, and make sure all meta-predicates through which it is called are loaded (or built-in). You can enable
:- set_prolog_flag(warn_autoload, true).
to get a warning about libraries that use goal/term expansion and are not loaded before the code is called. The default is still false; it might change to true shortly. It still occasionally gives a few somewhat misleading warnings.
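Concretely, that amounts to a few directives at the top of the file; library(apply) here is just an example of a meta-predicate library you might be calling lambdas through:

:- set_prolog_flag(warn_autoload, true).   % warn about expansion libraries that are not loaded
:- use_module(library(yall)).              % so [X]>>Goal is compiled rather than interpreted
:- use_module(library(apply)).             % maplist/3, foldl/4, ... if you use them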
If you use library(yall), you must really understand it. I recommend using listing/1 on the predicates in which you use it to make sure it is compiled. Interpreted yall is semantically different and very slow.
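For example, a quick check along these lines (load library(yall) before the file, then inspect the clause):

?- use_module(library(yall)).
?- [cnf_pretty_printer].
?- listing(p001/2).
% In the compiled case the listed body should call a generated auxiliary
% predicate instead of containing call([X,Y]>>(...), X, Y); if the >> term
% is still there, the lambda is interpreted at run time.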
I was reading the docs about library(yall) again; it seems the comment by LogicalCaptain in the “Don’t forget the bracy part” section covers the behaviour you found.
I tried expand_term/2 and expand_goal/2, but they didn’t change the content of the TERM variable.
So that means that I cannot always be 100% sure which version of the code is called. I mean, even call([]>>(), ...) looks as if it puts the yall term into a variable (parameter).
But it should be consistent as long as I don’t touch it (with set_prolog_flag(warn_autoload, true) set).
But I am still not happy with the idea that the same term can lead to different behaviour and that a possible warning message may not be visible.
AFAIK (it is a long time since I studied this), this is fundamentally unsolvable. Probably the most reliable way would be to remove lambda expressions as callable predicates. Then they are always compiled (or produce an undefined error).
You cannot do runtime goal expansion on a yall expression because it needs to generate a helper predicate. It could of course generate a dynamic predicate, but that would be even slower, while cleaning up these predicates gets rather involved.
These troubles are the result of code-is-data (homoiconicity) and the lack of typing, which do not allow us to figure out that an expression is in fact code, and which parts of the (partly instantiated) expression are code and which parts are data.
Possibly an alternative solution would be to always compile yall expressions if library(yall) is loaded in the context, regardless of their position. This would imply that such code cannot have yall expressions as data. That would probably mean that the toplevel also needs to compile these and manage the generated code.
I think that, since most of the time (I assume) the expression will be used in a loop, the generation time should be compensated, more or less like with JIT. But I understand that mine is a naive point of view, in the same ballpark as my preference for unambiguous semantics.
Yes, JIT would solve the problem. A call of a term should be the trigger for a possible compilation step. The point is: where does the term come from? Does it come from a static position in the source code, is it dynamically generated, or is it composed from several terms at different static positions of the source code, so that partial compilation and later composition would also make sense? How often does the term have to be compiled, and for which variations? And how do you propagate an error in case of a failed compilation?
The question is how you associate the compiled code with the data structure. In the simplest case the user does it himself. In R, for instance, the user can compile a function from an R function:
(ins)> library( compiler)
(ins)> f <- function(x) x+1
(ins)> fc <- cmpfun(f)
(ins)> fc(2)
[1] 3
When a user wants compiled code, he could then decide it himself. And that could maybe be the foundation for a later JIT system.
Another idea: when we have such a local compiler, it would be interesting to compile a predicate with some parameters already given (but not all of them), like in Haskell, where you can provide the first argument and get a function of the remaining arguments. A partially parameterized predicate could also be much faster when the provided parameters lead to a reduction of the code.
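Just as an aside (without any compilation or specialisation involved): plain call/N already gives that “supply some arguments now, the rest at call time” reading in Prolog, here with the built-in plus/3:

?- call(plus(1), 2, X).   % call/3 appends the remaining arguments to the goal plus(1)
X = 3.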