Careful with 0 rdiv 0 versus 0.0/0.0

I am now playing around with nan, for example I have
already learnt this trick:

?- X is 1 rdiv 2 + nan.
ERROR: Arithmetic: evaluation error: `undefined'

?- set_prolog_flag(float_undefined, nan).
true.

?- X is 1 rdiv 2 + nan.
X = 1.5NaN.

But here there seems to be a gap:

?- X is 0 rdiv 0.
ERROR: Arithmetic: evaluation error: `zero_divisor'

?- set_prolog_flag(float_zero_div, nan).
ERROR: Domain error: `flag_value' expected, found `nan'

The problem is that 0/0 is listed here as undefined or invalid,
with a remark that this can mean the value NaN:

An undefined exception can occur with:
0/0 was zero_divisor in 13211-1
These correspond to the cases where IEEE 754 would
produce the INVALID exception and produce a NaN.
http://eclipseclp.org/Specs/core_update_float.html

But SWI-Prolog seems not to follow the undefined approach,
otherwise my previous set_prolog_flag(float_undefined, nan)
would have had an effect. But then a float_zero_div flag alone is not
specific enough; a float_zero_div_zero flag or a float_invalid flag
would be needed to address 0/0 and solve the problem. One could
then completely follow Joachim Schimpf's proposal?
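
Just to illustrate what I mean, purely hypothetically: the flag below is the one proposed above, it does not exist in SWI-Prolog, and the answer shows the behaviour I would like to get.

/* hypothetical flag, not implemented in SWI-Prolog */
?- set_prolog_flag(float_zero_div_zero, nan).
true.

?- X is 0 rdiv 0.
X = 1.5NaN.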

Strangely, if I switch off the division-by-zero error in my library(arithmetic/ratio)
and replace it by a normalization N/0 ~~> sign(N)/0, I can define:

nan(0#0).

And then the following works, without changing anything else in
my library(arithmetic/ratio):

?- X is nan.
X = 0#0.
?- X is 1 rdiv 2+nan.
X = 0#0.
?- X is 1 rdiv 2-nan.
X = 0#0.
?- X is 1 rdiv 2*nan.
X = 0#0.
?- X is 1 rdiv 2 rdiv nan.
X = 0#0.
?- X is nan rdiv (1 rdiv 2).
X = 0#0.
?- X is 0 rdiv 0.
X = 0#0.

The next test would be to see what happens if I define this infinity,
inspired by Leonhard Euler:

inf(1#0).

Edit 21.01.2023
Ok, here is what infinity can do, again without changing the library(arithmetic/ratio), just
using the school-book rules for rational arithmetic on Q*, plus the different normalization:

?- X is inf.
X = 1#0.
?- X is -inf.
X = -1#0.
?- X is 1 rdiv 2+inf.
X = 1#0.
?- X is 1 rdiv 2*inf.
X = 1#0.
?- X is 1 rdiv 2-inf.
X = -1#0.
?- X is 1 rdiv 2 rdiv inf.
X = 0.
?- X is inf rdiv (1 rdiv 2).
X = 1#0.
?- X is inf-inf.
X = 0#0.
?- X is inf*inf.
X = 1#0.
?- X is inf rdiv inf.
X = 0#0.

The above is all fine, if we read 1#0 as infinity and 0#0 as NaN again.
What goes wrong is for example:

?- X is inf+inf.
X = 0#0.

It kind of wraps around. The above could be fixed by an
additional rule in rational number addition.
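
For concreteness, a minimal sketch of such an additional rule, assuming the N#D representation with # as an infix operator and the sign(N)/0 normalization from above; the predicate names are made up and this is not the actual code of library(arithmetic/ratio):

% Special-case addition of two infinities (denominator 0): equal signs keep
% the sign, opposite signs give NaN (0#0), so inf+inf stays 1#0 instead of
% wrapping around to 0#0.
ratio_add(N1#0, N2#0, Z) :- !,
    S is sign(sign(N1) + sign(N2)),   % 1, 0 or -1
    Z = S#0.
ratio_add(N1#D1, N2#D2, Z) :-         % ordinary school-book rule
    N is N1*D2 + N2*D1,
    D is D1*D2,
    ratio_norm(N, D, Z).              % assumed normalization helper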

The first point is that the only permissible values of the float_zero_div flag are error and infinity, as prescribed by the Eclipse document you referenced.

The second point is that rdiv is a rational function and (currently) not subject to the float_* flags. That could be changed, but I think you would then be obliged to do the same for all the integer divide functions: //, div, rem and mod. Having just lost the battle for correct comparison, I have no interest in that fight. Also note that the Eclipse proposal does not mention any of the above functions - its title is "Proposal for Prolog Standard core update wrt floating point arithmetic".

However, I think there is a bug in the SWIP / function: given that float_zero_div is set to infinity, 0/0 evaluates to infinity while 0.0/0 evaluates to NaN; the latter is correct IMO.
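
For the record, a sketch of the case I mean (SWI-Prolog 9.1.x, both flags changed from their error default):

?- set_prolog_flag(float_undefined, nan),
   set_prolog_flag(float_zero_div, infinity).
true.

?- X is 0/0, Y is 0.0/0.
/* X is bound to positive infinity (the bug), Y to NaN (correct IMO) */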

I don’t doubt that Q*, and Z*, can be implemented. My only question is whether there’s a tangible benefit for doing so in SWIP, particularly given that the underlying supporting libraries (GMP) only support Q and Z.

You might also be interested in: < https://people.eecs.berkeley.edu/~fateman/papers/extrat.ps >.

You are very talented in creating new FUD and
changing the topic. But you made me curious:

In Joachim Schimpf's link they are mentioned under the heading:

Off-Topic: Integer operations
http://eclipseclp.org/Specs/core_update_float.html

I do not overload these evaluable functions in my
library(arithmetic/ratio); I had such an overloading in the past
but removed it. Neither do I have float implementations
for these evaluable functions. I had float implementations in
the past but removed them, because they are a little bit brittle:
for floats you can easily construct examples where this here
gets violated, using some low-level IEEE rem functions.
Now I am creating some FUD myself, LoL; now that my implementation
is gone, how do I prove it? I need to find an archived version!

X rem Y =:= X - (X // Y)*Y
X mod Y =:= X - (X div Y)*Y

For rational numbers you can construct evaluable functions
//, div, rem and mod such that the above is still satisfied.
I don’t understand what the obligation should be, and what
the problem should be. You can add //, div, rem and mod
to a rational numbers implementation. But what problems do
you see in connection with nan and inf? Just curious.
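
A minimal sketch of what I mean, written as plain predicates rather than overloaded evaluable functions (the rat_* names are made up); since rational arithmetic is exact, the two identities above hold by construction:

% Truncating and flooring integer division via the exact rational quotient.
rat_intdiv(X, Y, Z) :- Z is truncate(X rdiv Y).     % plays the role of X // Y
rat_fldiv(X, Y, Z)  :- Z is floor(X rdiv Y).        % plays the role of X div Y

% rem and mod defined so that X rem Y =:= X - (X // Y)*Y and
% X mod Y =:= X - (X div Y)*Y hold exactly.
rat_rem(X, Y, Z) :- rat_intdiv(X, Y, Q), Z is X - Q*Y.
rat_mod(X, Y, Z) :- rat_fldiv(X, Y, Q),  Z is X - Q*Y.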

As you might guess, my nan and inf question aims at some Q*
inspired take on nan and inf, not just making them floats.

Edit 21.01.2023
Luckily my FUD is not FUD, I can prove it: downloading interpreter_1.5.1.jar
gives me the results below. My speculation is that the IEEE rem function tries to gain some
precision, which is for example needed in mathematical functions such as sin/1, cos/1 and tan/1
to perform a range reduction of the given radian, and which cannot be achieved by the formula:

?- X = 1000000, A is X mod pi.
X = 1000000, A = 2.78402848654304.

?- X = 1000000, B is X - (X div pi)*pi.
X = 1000000, B = 2.7840284865815192.

?- X = 1000000, B is X - floor(X/pi)*pi.
X = 1000000, B = 2.7840284865815192.

In the above I implemented (mod)/2 via the Java % operator, which can do
float modulo calculation. Most likely it calls some IEEE rem implementation. How
is this discrepancy explained? A and B are not identical! This can be
extremely irritating in longer computations and when massaging formulas.

You can explain what IEEE rem does via the use
of quadruple precision. I get the result below. The decimal128/1
moves the calculation into quadruple precision without
changing the value of pi; the same value of pi is used.
Well, almost: it is converted from binary to decimal, but
with higher precision. The float/1 moves the result back
to a 64-bit float, i.e. double precision:

?- X = 1000000, P is decimal128(pi), A is X-floor(X/P)*P, C is float(A).
C = 2.78402848654304.
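
The same effect can also be reproduced with exact rational arithmetic instead of decimal128; a sketch in SWI-Prolog, where rational(pi) takes the exact rational value of the float pi, the reduction is done exactly against that same pi, and the result should agree with the IEEE rem value above:

?- X = 1000000, P is rational(pi),
   A is X - floor(X rdiv P)*P, C is float(A).
C = 2.78402848654304.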

I only have quadruple precision since release 1.5.5, and only
for formerly Jekejeke Prolog; Dogelog Player doesn’t have
it. Before that release the formulas were a hot potato. Maybe
the formulas can now be reconsidered? Cool, the above is an
interesting find! But removing IEEE rem was of course the easier
solution in the past. Maybe in the future referring to quadruple precision
could help the end-user if he needs a higher-precision rem for floats?

Edit 21.01.2023
I also don’t feel much pressure right now to reintroduce
some modulo computation with floats; it hasn’t appeared on the
radar of SWI-Prolog yet? I find for example:

?- X = 1000000, A is X mod pi.
ERROR: Type error: `integer' expected, found `3.141592653589793' (a float)

I only tried to make the point that the current rdiv behaviour is consistent with the SWIP implementation of rationals as Q, contrary to what the title of this thread might suggest.

I then basically asked the question: if extending Q to Q* (by changing this behaviour) is such a good idea, why not also extend Z to Z*? It can be a slippery slope.

You might consider this FUD. I don’t, but I’ll leave it to others to make their own decision.

The title of this thread correctly suggests a discrepancy, whether
you like it or not. Your subjective taste in certain matters will
not eradicate this small discrepancy, which is an objective fact.

You say this because you didn’t get my point that (rdiv)/2 is the equivalent of (/)/2,
only for rational numbers and not for floating point numbers, and that therefore
we could apply Joachim Schimpf's list to it. Obviously (rdiv)/2 is a mathematically correct
realization of the real-number division / : R x R \ {0} → R, only on the subset Q x Q \ {0}.

I didn’t go into Q* per se when asking about {0}. But I read from Joachim
Schimpf's table that 0 rdiv 0 could be NaN, similarly to how 0.0/0.0 could be NaN.
You also blamed Joachim Schimpf's table. This is also FUD; what is your proof
exactly? I don’t think the SWI-Prolog flags, as adopted from Joachim Schimpf's
table, have any defect for the ordinary (/)/2. Let's try this:

/* SWI-Prolog 9.1.2 default flags, windows 10 platform  */
?- X is 0.0/0.0.
ERROR: Arithmetic: evaluation error: `undefined'

?- set_prolog_flag(float_undefined, nan).
true.

?- X is 0.0/0.0.
X = 1.5NaN.

The above works fine; Joachim Schimpf's table is enough to
get NaN. How can I achieve the same for 0 rdiv 0? It's currently
not possible, since the rational arithmetic maps this to zero_divisor
instead of undefined.
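
As a stop-gap I can only do the mapping myself in an error handler; a minimal sketch (rdiv_nan/3 is a made-up helper, and note that it cannot distinguish 0 rdiv 0 from 1 rdiv 0, which is exactly the problem):

% Map the zero_divisor error raised by rdiv to a float NaN, since no
% prolog flag covers this case for rational arithmetic.
rdiv_nan(X, Y, Z) :-
    catch(Z is X rdiv Y,
          error(evaluation_error(zero_divisor), _),
          Z is nan).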

Edit 21.01.2023
Maybe I didn’t have this analysis at hand in my previous posts. But
there is now more evidence that 0 rdiv 0 is not viewed as giving
undefined in SWI-Prolog, whereas 0.0/0.0 is viewed as giving undefined,
another case not covered by the ISO core standard, and listed by
Joachim Schimpf as an amendment that should be submitted
to the ISO core standard. You can try it: when you start SWI-Prolog fresh,
you get with the default flags:

/* SWI-Prolog 9.1.2 default flags, windows 10 platform  */
/* Two different errors */
?- catch(X is 0.0/0.0, error(E,_), true).
E = evaluation_error(undefined).

?- catch(X is 1.0/0.0, error(E,_), true).
E = evaluation_error(zero_divisor).

Two different errors as suggested by Joachim Schimpf. On the
other hand (rdiv)/2 doesn’t do the same, it only gives one error:

/* SWI-Prolog 9.1.2 default flags, windows 10 platform */
/* Twice the same error */
?- catch(X is 0 rdiv 0, error(E,_), true).
E = evaluation_error(zero_divisor).

?- catch(X is 1 rdiv 0, error(E,_), true).
E = evaluation_error(zero_divisor).

I hope I could clarify my point; it's not some FUD, but a serious question!

When was this behaviour introduced in SWI-Prolog? It's difficult to find
among other Prolog systems and other programming languages; so
far Trealla Prolog and Scryer Prolog only show me:

/* Trealla Prolog 2.7.34 and Scryer Prolog 0.9.1-70-g5e0e3e27 */
?- catch(X is 0.0/0.0, error(E,_), true).
   E = evaluation_error(zero_divisor).

?- catch(X is 1.0/0.0, error(E,_), true).
   E = evaluation_error(zero_divisor).

And Python shows me:

/* CPython 3.11 */
>>> 0/0
ZeroDivisionError: division by zero

>>> 1/0
ZeroDivisionError: division by zero

On the other hand Ciao Prolog shows me the following;
the negative NaN could maybe be an error:

/* Ciao Prolog 1.22.0 */
?- X is 0.0/0.0.
X = -0.Nan

?- X is 1.0/0.0.
X = 0.Inf

And JavaScript shows me:

/* Chrome 109.0.5414.75 */
console.log(0/0);
> NaN

console.log(1/0);
> Infinity

rdiv/2 was introduced as a quick way to introduce rational numbers without introducing a new data type. It is there only for this reason and should be deprecated now that we have atomic rational numbers. Surely it should do nothing with floats.

It has nothing to do with rationals and the operator (rdiv)/2. SWI-Prolog
also has a discrepancy in error messages for its integers and the
operator (/)/2. With integers I get only one error:

/* SWI-Prolog 9.1.2 default flags, windows 10 platform */
/* Twice the same error */
?- catch(X is 0/0, error(E,_), true).
E = evaluation_error(zero_divisor).

?- catch(X is 1/0, error(E,_), true).
E = evaluation_error(zero_divisor).

With floats I get two errors:

/* SWI-Prolog 9.1.2 default flags, windows 10 platform  */
/* Two different errors */
?- catch(X is 0.0/0.0, error(E,_), true).
E = evaluation_error(undefined).

?- catch(X is 1.0/0.0, error(E,_), true).
E = evaluation_error(zero_divisor).

It's mainly a behaviour of float/float, which is primarily not in sync
with integer/integer, and secondarily also not in sync with rational/rational
and rational rdiv rational.

Edit 22.01.2023
To be precise the tipping point is already integer/float and float/integer:

?- member(A, [0,0.0]), member(B, [0,0.0]), 
    catch(X is A/B, error(E, _), true), write(A/B=E), nl, fail; true.
0/0=evaluation_error(zero_divisor)
0/0.0=evaluation_error(undefined)
0.0/0=evaluation_error(undefined)
0.0/0.0=evaluation_error(undefined)
true.

You get the same tipping point with prefer_rationals=true:

?- set_prolog_flag(prefer_rationals, true).
true.

?- member(A, [0,0.0]), member(B, [0,0.0]), 
    catch(X is A/B, error(E, _), true), write(A/B=E), nl, fail; true.
0/0=evaluation_error(zero_divisor)
0/0.0=evaluation_error(undefined)
0.0/0=evaluation_error(undefined)
0.0/0.0=evaluation_error(undefined)
true.

I definitely think there is an issue in all the “noise”. @j4n_bur53 correctly points out that 0 rdiv 0 generates a zero_divisor error but it really should generate an undefined error.

0/0 has the same problem, as I pointed out elsewhere. This is a bigger issue IMO because it is a direct violation of the Eclipse proposal. And combined with the fact that it is subject to the float_* IEEE continuation flags, it can actually produce erroneous results. (If either argument is 0.0, it works correctly.)

Also 0 div 0, 0//0, 0 rem 0 and 0 mod 0 all have the same issue as 0 rdiv 0: an incorrect error code.

Regardless of the status of rdiv, these should all be fixed IMO.

I have no opinion yet on how to fix it, if at all. For example,
I noticed that JavaScript is also a little bit incoherent in
this respect. I get:

console.log(0/0);

try {
  console.log(0n/0n);
} catch (e) {
  console.log(e);
}

Gives me:

> NaN
> RangeError: Division by zero

So it's the age-old question: should a Prolog system create a new
reality with its own rules about numbers, or should it simply delegate
to the underlying host programming language runtime system,
and pass downstream what it gets from upstream?

Edit 22.01.2023
Now I am comparing apples with oranges, since (/)/2 on the
new BigInt from JavaScript is really (//)/2 and not (/)/2, and (//)/2 was
declared off-topic by myself. That (/)/2 is indeed (//)/2 can be seen here:

console.log(1000n / 3n);
console.log(-1000n / 3n);

Gives me:

> 333n
> -333n

But it is still interesting to gather some evidence about the case 0 / 0,
and we might also look into 0 // 0, especially since we might be interested
in identity laws such as:

X // Y = truncate(X rdiv Y)
X div Y = floor(X rdiv Y)
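
For ordinary (nonzero) divisors these identities can be checked directly in SWI-Prolog, for example:

/* holds also for a negative divisor, with the default
   integer_rounding_function = toward_zero */
?- X = 7, Y = -2,
   X // Y =:= truncate(X rdiv Y),
   X div Y =:= floor(X rdiv Y).
X = 7, Y = -2.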

So 0 // 0 and 0 div 0 could be derived from 0 rdiv 0, as @ridgeworks
seems to insist? What is this insistence based on? We might
indeed want the special values and/or exceptions to be related.

Although true, I don’t think I said (or insisted) anything like that, did I? Now in this case they are all undefined so their behaviour should consistently reflect that.

As a consequence, if any functions result in a continuation value, rather than an error, any continuation values for equivalent arguments should be arithmetically equivalent. For example, 0.0/0.0 =:= 0/0; currently that’s not true.

Why do we care about the exception? ECLiPSe simply seems to generate an abort, but possibly mine is too old (6.1). The ISO standard demands an evaluation_error(zero_divisor) exception. Shouldn’t we simply consider this to be subsumed by evaluation_error(undefined)? Does it matter much? Isn’t the only thing that really matters what happens if we set the float flags?

This, I think, is wrong because integer division acts as float division if the integers do not divide exactly or prefer_rationals is enabled.

?- set_prolog_flag(float_undefined, nan).
true.

?- A is 0 / 0.0.
A = 1.5NaN

?- A is 0 / 0.
ERROR: Arithmetic: evaluation error: `zero_divisor'.

Now this is a bit inconsistent as we have the flag float_zero_div that can be infinity, but clearly 0/0[.0] should not be infinity. I can live with that.

A bit more troublesome is how to deal with 0/0 if prefer_rationals is enabled. We do not have a rational equivalent to NaN, and generating a float is also not what I want. An exception seems the only way out. We could make an exception if automatic conversion of rational to float for large rationals is also enabled as this means the application is prepared to get either a rational or float result.

JavaScript is quite a bit incoherent in a lot of respects; and its design decisions shouldn’t be used as a paradigm for anything.

For some examples of JavaScript being incoherent:
Wat (starting at 1:25)

1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. …
A Brief, Incomplete, and Mostly Wrong History of Programming Languages

I think there are cases, e.g., implementing extended rationals at the Prolog level, where you want to distinguish between the two cases, e.g., to convert undefined errors to a custom NaN value and zero_divisor to a custom infinity value, in an error handler. Also the Eclipse proposal (which makes a lot of sense to me) does define separate errors for these two cases, and it is consistent with IEEE behaviour.

I avoid this issue by thinking of the floating point special values as polymorphic values, even if they are implemented as floats. This all seems to work fine in mixed mode expressions (+, -, *, /, **, elementary functions, …), i.e., generating a “float” for these special cases is exactly what I want.

The only downside is that the builtin numeric type tests reflect the underlying implementation, but that’s much more preferable than generating exceptions in expression evaluation that have to be avoided a priori or dealt with in an exception handler.

That leaves type specific functions as an open question - should they be allowed to return a polymorphic infinity or nan? I kind of think they should, but don’t really have any compelling use cases. Maybe these functions should even accept a polymorphic infinity as an argument, e.g., 0 is 1 div inf.

So the status quo has some issues at the intellectual level, but practically it seems to work fine (for me).

Well this seems to be a two-way street. Maybe it’s a language issue, but “should be” does mean “insist” to me.

In any case, I’m more than happy to stop discussing this whole thing. If you think there’s a problem raise an issue on Github.

There is a strange effect: NaN makes less-than-or-equal
no longer a total order, i.e. it is no longer
assured that X =< Y or Y =< X:

/* SWI-Prolog 9.1.2 */
?- nan =< 1.0.
false.

?- 1.0 =< nan.
false.

Did ROK ever write about that? Exact comparison would
gain transitivity. But continuation values, especially the value
NaN, destroy totality.

Edit 24.01.2023
An annoying side effect: this bootstrapping of the min/2 and max/2
evaluable functions reproduces neither missing-value
semantics nor propagation semantics.

min_(X, Y, Z) :- X =< Y, !, Z = X.
min_(_, Y, Y).

max_(X, Y, Z) :- Y =< X, !, Z = X.
max_(_, Y, Y).

You can try this here:

/* SWI-Prolog 9.1.2 */
/* here it behaves like missing value */
?- X is nan, min_(X, 1.0, Y).
Y = 1.0.

/* here it behaves like propagate value */
?- X is nan, min_(1.0, X, Y).
Y = 1.5NaN.
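
A propagating variant is easy to bootstrap as well; a minimal sketch, assuming that NaN is the only value that compares arithmetically unequal to itself (X =\= X), which lets me test for it without a dedicated type test:

% min_nan/3: like min_/3, but propagates NaN from either argument.
min_nan(X, _, X) :- X =\= X, !.     % left argument is NaN
min_nan(_, Y, Y) :- Y =\= Y, !.     % right argument is NaN
min_nan(X, Y, X) :- X =< Y, !.
min_nan(_, Y, Y).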

What I couldn’t figure out yet is whether modern programming
languages such as Python or JavaScript make some distinction
between things like quiet NaN and non-quiet (signaling) NaN.

On the other hand, their less-than-or-equal also loses
totality; the order relation is no longer total, which is
again in accordance with some mathematical-logic
approaches to free logics. In JavaScript I get:

console.log(NaN<=1.0);
> false

console.log(1.0<=NaN);
> false

In Python I get:

>>> float('nan') <= 1.0
False

>>> 1.0 <= float('nan')
False

To that extent there is agreement with SWI-Prolog.

Edit 25.01.2023
What are free logics? Basically, logics with terms that
are non-denoting, which poses some problems since
terms are no longer always single-valued;
a value might be absent:

Free Logic - Stanford Encyclopedia of Philosophy
Classical logic requires each singular term to denote
an object in the domain of quantification—which is usually
understood as the set of “existing” objects. Free logic does not.
https://plato.stanford.edu/entries/logic-free/

They have some suggestions for how predicates should behave
in the context of partial functions. See sections 3 (Semantics),
4.1 (Problems with Primitive Predicates), and 5.2 (Logics with
Partial or Non-Strict Functions).

IEEE has a standard for how NaN behaves. They put lots of thought into the matter. In general we should just follow them. If we have a rational NaN it should behave like the float NaN. Whether the two are the same value or not doesn’t concern me. As far as I am concerned they can both be Bottom.

Most often, well-made programs don’t encounter NaN. Library functions that encounter NaN due to mismanaged arguments usually would yield NaN rather than a number, so the poorly made program is aware of the problem and can be changed. If it weren’t for interop with other languages and the processor, fail/0 would be appropriate, but we have a standard.