There is no better way to deep-dive into a language than attempting to help someone else with their problems… a simple question on SO brings me to my knees… I was suggesting to statically declare such arrays, likely with this code (undebugged, for the reasons that follow):
:- module(arrays, [fill_data/0, sum_a1_a2_a3_to_a4/0]).

:- initialization alloc_arrays.

alloc_arrays :-   % cannot use maplist here (must be lambda_free, ok, I understand)
    decl_array(a1),
    decl_array(a2),
    decl_array(a3),
    decl_array(a4).

decl_array(Name) :-
    length(Args, 1_000_000),
    Array =.. [Name|Args],
    assertz(Array).   % this chokes on max_arity
...
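One workaround, a sketch under the assumption that an arity-2 fact is acceptable (the `array/2` wrapper name is mine, and this is undebugged in the same spirit as the code above): keep the *predicate* arity small and stash the million-argument compound inside a single argument, where only memory limits the arity.

```prolog
:- dynamic array/2.

% Store the huge compound as ONE argument of a small fact, so the
% predicate arity stays at 2 while the term arity is 1_000_000.
decl_array(Name) :-
    functor(Array, Name, 1_000_000),   % compound with 1,000,000 fresh variables
    assertz(array(Name, Array)).
```

After fetching the fact, element `I` can be read with `arg(I, Array, Elem)`.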
I understand (I think) the need to keep the arity low. It seems related to the generality of meta_predicate compilation (call/N), but the documentation is lacking on this topic.
?- current_prolog_flag(max_arity,C).
C = unbounded.
?- set_prolog_flag(max_arity, 1_000_000).
ERROR: No permission to modify flag `max_arity'
In SWI-Prolog, terms have no arity restriction (except for memory), so you can do functor(Array, a, 1_000_000). Predicates, however, have a max arity of 1024. This is a compile-time (#define) constant. It isn't very expensive to raise it a little, but making it truly unbounded requires some redesign of the predicate calling mechanism. You can do
?- functor(Array, a, 1_000_000),
   assert(my_array(Array)).
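Since asserted clauses are copied on every retrieval, it may be better to hold the big term in a global variable when in-place updates are needed. A sketch using SWI-Prolog's nb_setval/2, nb_getval/2, arg/3 and nb_setarg/3 (the make_array, get_elem and set_elem names are my own):

```prolog
% Keep the array in a non-backtrackable global variable: nb_setval/2
% copies the term once on storage; nb_getval/2 then hands back the
% stored term itself, so nb_setarg/3 can update it in place.
make_array(Name, Size) :-
    functor(Array, Name, Size),
    nb_setval(Name, Array).

get_elem(Name, I, Value) :-
    nb_getval(Name, Array),
    arg(I, Array, Value).        % O(1) access by argument position

set_elem(Name, I, Value) :-
    nb_getval(Name, Array),
    nb_setarg(I, Array, Value).  % destructive, non-backtrackable assignment
```

This avoids copying the whole array on every access, which assert/2 plus retrieval would do.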
You’d have to check the ISO standard. AFAIK there is only one max_arity flag, and I do not know whether it refers to compounds, to predicates, or possibly to other roles in which a compound may appear. I don’t see much reason to act on this. If anything, it would be getting rid of the limit completely, and that is probably mostly just for elegance. I do know at least one user who runs an application that needs a higher arity and changes the default before compiling the system.
I would be more in favor of max_arity_procedure or max_procedure_arity. Compiled what? Some systems also have a notion of compiled vs. interpreted. It seems more systems have different limits for terms and predicates (or should we say procedures?). Testing ECLiPSe, we get
out of range in assertz(...)
Otherwise I’m not against such a flag. It might even be made writable. On the other hand, once started, it may be better to remove the limit altogether…
I mean, you can compile anything to anything. What is constrained is compiling a term as (to?) a clause. That may be a VM restriction or something completely different. In this case the VM instructions can perfectly well represent clauses with unbounded arity (limited by memory only). It is the actual VM implementation that needs to check that there is enough space for the arguments before it knows the number of arguments. By limiting each predicate to at most 1024 arguments, we can simply check that there is space for 1024 arguments. Now you can also see why raising the limit is cheap: the performance impact is zero, but it causes a local stack overflow to be raised (even more) too early.
In my view it is the system that cannot handle procedures with more than N arguments. Whether that is due to the compiler, the VM implementation, or the position of Pluto relative to Venus is, in my view, irrelevant to the user.
Describing the clause indexing using flags is an interesting idea. There are so many limits, thresholds, etc. that I doubt this is meaningfully possible. In any case, I’m not going to try.