Block operator `[]` for matrix notation

Also:

?- X is 0'a.
X = 97.

Just too many ways of expressing the same value, and probably way too late to clean it up.
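
For example, with a one-element code list and (assuming the default double_quotes flag) a one-character string both still evaluating to the character code, these are interchangeable:

?- X is 0'a, Y is [97], Z is "a".
X = Y, Y = Z, Z = 97.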

I think the eval_code rule still captures the cases that are currently allowed. The nasty one for my little project is X is "a"; I would like it to evaluate to a string. I guess I’ll have to invent an identity function to handle these top-level corner cases:

?- X is id(a).
X = a.

?- X is id([a]).
X = [a].

?- X is id("a").
X = "a".

?- X is id("abcde").
X = "abcde".

?- X is id([]).
X = [].

All this comes from (as @j4n_bur53 claims) DEC10 Prolog, which didn’t have 0'a and “invented” A is "a". In those days that was syntactic sugar for A is [97], and this ./2 needed to be a function that requires its 2nd argument to be [] and evaluates to its first argument, provided this is a character code (Unicode 0..0x1fffff in SWI-Prolog, except for Windows where it is 0..0xffff).

Since then the meaning of “…” has become less fixed, as “a” may now be [97], [a], ‘a’ or “a” (hope that is all :slight_smile: ). We can’t deal with ‘a’ as that would require us to define a function for each character that evaluates to the code (while ‘e’ already evaluates to 2.718281828459045).
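
For example, with plain SWI-Prolog arithmetic:

?- X is e.
X = 2.718281828459045.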

I have little clue what happens if we ditch this weird thing from arithmetic. A long time ago I tried to delete tell/1, told/0 and friends. That was not appreciated :frowning: There is a lot of old code around, even (or maybe notably) in teaching material.

I’m not against adding a flag to disable evaluating [97] and friends, but I’m not sure how much it would help. That just makes A is “a” raise an evaluation error.

That is an interesting idea. I guess that would also allow distinguishing a prefix from a postfix operator, no? For DSLs it can be quite useful to tell ++X and X++ apart, something which normal Prolog cannot do.

For consistency reasons, I think that treating a single-character string, e.g., “a”, in an arithmetic expression the same as any other string is the right thing to do. This isn’t strictly upwards compatible, but I don’t think many users would be depending on this, and there are (many) better alternatives.

If we had a do-over, I suspect most would agree that [97] and friends would be eliminated. Failing that, a flag is an interesting option.

For my immediate purposes, I’m primarily interested in what’s “evaluable” in the arithmetic module goal expansion, i.e., what’s a legal evaluable([Code]). As currently defined, Code is unconstrained, which means any single term, including a user-defined function, can appear in square brackets and is “evaluable”. So this needs to be more restrictive; I’m proposing:

eval_code(Code) :- var(Code).                          % defer the check to evaluation time
eval_code(Code) :- integer(Code), Code >= 0.           % non-negative integer character code, e.g., 0'a
eval_code(Code) :- atom(Code), atom_length(Code, 1).   % single-character atom, e.g., a

as being consistent with the current implementation. My original question was “is it consistent?”. (If a flag was introduced to control “[97] and friends”, this test would take that into account.)

Thanks for confirming the cases. This check is part of the arithmetic goal expansion, so a var is still a valid choice at that point. Any illegal values, including out-of-range character codes, will be caught at evaluation time.

Added a Prolog flag max_char_code. Possibly there should also be a min_char_code, as some systems allow the code 0 in atoms and strings (e.g., SWI-Prolog) and others do not.
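
With that flag, the integer clause of the eval_code check above could be tightened along these lines (just a sketch; the flag’s value is platform dependent):

eval_code(Code) :- var(Code).                          % defer the check to evaluation time
eval_code(Code) :-                                     % character code within the supported range
    integer(Code), Code >= 0,
    current_prolog_flag(max_char_code, Max),
    Code =< Max.
eval_code(Code) :- atom(Code), atom_length(Code, 1).   % single-character atom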

If you compile as optimized (-O), there is. You get an error on the above, and if you compile, e.g., X is [97], it is simply compiled to X is 97.

It is not wise to rely on any such details. All this is supposed to support A is "a" and nothing more. I’ve updated the docs to explicitly deprecate this feature and to be a bit more explicit on how it behaves as long as it still exists.

Exactly the point; the compile-time check for user-defined function goal expansion needs to be beefed up. In my current development version, with a user-defined function for evaluating lists:

?- X is [1+2].
X = [3].

For those who might be interested in experimenting with extending Prolog arithmetic to support custom types for functional DSLs, I’ve written an arithmetic_types pack as an extension to library(arithmetic) (https://github.com/ridgeworks/arithmetic_types). It’s basically a clone of arithmetic, using the same :- arithmetic_function/1 directive, but provides its goal expansion as user:goal_expansion/2 rather than system:goal_expansion/2. Effectively, this means arithmetic_types overrides library(arithmetic) once it’s loaded.
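
To give a flavour of the shared arithmetic_function/1 API with stock library(arithmetic) (mid/2 here is just an illustrative function, not something shipped with the pack):

:- use_module(library(arithmetic)).

:- arithmetic_function(mid/2).     % declare mid/2 as evaluable
mid(X, Y, M) :- M is (X+Y)/2.      % defining predicate takes one extra argument for the result

demo(A) :- A is mid(2, 4).         % expanded at compile time; A = 3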

The pack also includes a few “type_modules” as a starter kit; see the pack README for details. Some examples of usage follow.

Indexing and slicing (Python model) of lists and strings/atoms:

?- T is [a,b,c][0].
T = a.

?- T is [a,b,c][-1].
T = c.

?- L is [a,b,c][1:2].
L = [b].

?- L is [a,b,c][1:E].
L = [b, c],
E = 3.

?- N is [a,2+3,c][1].
N = 5.

?- Ch is "abcd"[0].
Ch = a.

?- A=abc, S is A[_:string_length(A)].
A = abc,
S = "abc".

?- Ch = "D", Lower is "abcdefghijklmnopqrstuvwxyz"[Ch-"A"].
Ch = "D",
Lower = d.

The last example exploits the fact that "A" evaluates to a number (the character code of A). In cases where extended arithmetic is ambiguous (e.g., is “A” a string or a number?), the existing semantics overrides the extension. (This also applies to atoms like pi, e, inf, etc.)
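
Concretely, while single-character strings still evaluate to their codes:

?- I is "D" - "A".
I = 3.

which is why the expression above lands on d.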

Note that the begin/end values of a slice default to 0 and the length of the block’s input argument (the term being sliced), respectively.
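
For example, leaving either bound of the slice unbound (illustrative queries in the same style as those above):

?- L is [a,b,c,d][_:2].
L = [a, b].

?- L is [a,b,c,d][2:_].
L = [c, d].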

Expressions in lists are not evaluated until they are the result of an indexing operation, supporting lazy evaluation. E.g., a “safe” division:

?- between(0,1,Den), between(-1,1,Num),
Q is [[-inf,nan,inf][sign(Num)+1], Num/Den][Den\==0].

Den = 0,
Num = -1,
Q = -1.0Inf ;

Den = Num, Num = 0,
Q = 1.5NaN ;

Den = 0,
Num = 1,
Q = 1.0Inf ;

Den = 1,
Num = Q, Q = -1 ;

Den = 1,
Num = Q, Q = 0 ;

Den = Num, Num = Q, Q = 1.

There are two conditional evaluations done here: the first (Den\==0, which uses atomic comparisons in type_bool) determines whether the divide by 0 applies, and the second (sign(Num)+1) selects which IEEE continuation value to use.

The pack also includes an ndarray type loosely modeled after a subset of the Python (numpy) class of the same name. As is, it can be used to solve systems of linear equations using the inverse matrix formula, e.g., the solution of:

x + y + z + w = 13
2x + 3y − w = −1
−3x + 4y + z + 2w = 10
x + 2y − z + w = 1
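
In matrix form the system above is A·x = B with x = (X, Y, Z, W)ᵀ, so x = A⁻¹·B; the dot(inverse(A),B) expression below computes exactly that.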

Using the flag prefer_rationals=true (otherwise approximate floating point values may be generated):

?- A is ndarray([[1,1,1,1],[2,3,0,-1],[-3,4,1,2],[1,2,-1,1]]),
B is ndarray([[13],[-1],[10],[1]]),
Vs is ndarray([[X],[Y],[Z],[W]]),
Vs is dot(inverse(A),B).
A = #(#(1, 1, 1, 1), #(2, 3, 0, -1), #(-3, 4, 1, 2), #(1, 2, -1, 1)),
B = #(#(13), #(-1), #(10), #(1)),
Vs = #(#(2), #(0), #(6), #(5)),
X = 2,
Y = 0,
Z = 6,
W = 5.
  • function ndarray creates an n-dimensional array from a nested list (B and Vs are column vectors)
  • function inverse returns its matrix argument inverted
  • function dot is the dot product of its ndarray arguments

Type ndarray is also a sequence type, like lists and strings, so indexing and slicing are done the same way:

?- X is new(ndarray,[3]), X[_:_] is ndarray([1,2,3,4,5])[1:4].
X = #(2, 3, 4).

Indexing and slicing are examples of polymorphism: the same function “template” is used with different types, in this case lists, strings and arrays. See the pack README for an expanded explanation.
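
For instance, following the indexing examples above, the same [1] index can be applied to a list and to a string:

?- X is [a,b,c][1], Y is "abc"[1].
X = Y, Y = b.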
