I keep wondering why you often delete messages that do make sense. It is not really on my shortlist, but introducing rational infinity might also simplify dealing with the range boundaries in clp(fd).
I think rational(inf) being "undefined" means unsupported in SWIP. SWIP rationals are GMP rationals (type mpq_t) which, as far as I know, implement Q, not Q*. Similarly, SWIP integers are a combination of C 64-bit integers and GMP unbounded integers (type mpz_t) which collectively implement Z, not Z*. Z is a subset of Q. SWIP F* is defined by the IEEE floating point standard.
So the set of SWIP numbers is currently the union of F*, Q and Z; let's call it As* for lack of a better name. It includes two infinity values. (Interestingly, it also includes nan, which is a member of F*.) As* is a subset of R*. So the challenge is how to define the semantics of all the arithmetic functions over As*. Surely a few things have fallen through the cracks. Perhaps the value of rational(inf) should be 1.0Inf by analogy, since the values of integer(inf) and float(inf) are also 1.0Inf. There are also many functions which are restricted to subsets of As* and generate errors otherwise, e.g., rdiv only accepts rational arguments (members of Q), div only accepts integer arguments (members of Z), etc.
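For example (illustrative queries; exact error text may differ by version):
?- X is float(inf).
X = 1.0Inf.
?- X is 1 rdiv 2.
X = 1r2.
?- X is 7.0 div 2.
ERROR: Type error: `integer' expected, found `7.0' (a float)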
I'm not sure what the value is of adding more representations of infinity; it is what it is, a number larger than any finite value. The fact that its concrete internal representation is the IEEE infinity is an implementation detail; it was just the first one to the party and it is supported in hardware. F* defines a largest value less than infinity because F is a finite set. For the same reason, arithmetic on F* has rounding errors. (Overflow is just a special case of a rounding error.) Q and Z are infinite sets, so they don't define a largest finite value and don't have rounding error issues. (They do have the practical issue of managing their memory size and performance.)
So that's my "brainwriting" on current SWIP arithmetic. Holistically I think it all hangs together, but I'm sure there are many debatable details. And as there is no accepted standard for extended arithmetics, there are probably many ways to skin this cat.
I won't repeat myself, but suffice it to say I think the ISO standard and, by extension, most Prologs are wrong-headed. See CommonLisp and Python for two examples of languages that got it right (IMO). And the Eclipse proposal recognizes this, but really doesn't go far enough (IMO).
Unless the domains of floats and integers are strictly defined, any comparison that is dependent on a conversion to a float is subject to rounding errors, of which overflow is a special case. But we apparently can't fix Prolog because of this historical "baggage", so it's necessary to introduce new functions to achieve parity with CommonLisp. Additional errors can be used to catch some of the inconsistencies, but that isn't close to solving the real problem (IMO).
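For example, assuming 64-bit IEEE doubles and the float-converting comparison:
?- 9007199254740993 =:= 9007199254740992.0.
true.
The integer 2^53+1 is rounded to 2.0^53 during conversion, so two different values compare equal.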
I had a look at that. I first thought this was wrong, but I think it is ok. The code path occurs if one side of the comparison is a float and the other is a rational (or integer). In this case, the rational is converted to float and may thus end up as +/-Inf. Comparing infinity to a normal float gets the expected result. If the extended float flags are enabled the normal IEEE float rules apply, so that is fine too.
@ridgeworks, I pushed a patch for 0/0 to result in NaN if float_undefined is set to nan. See c92083831b0a6d63adf211292484a24653fa9d4e. This first resulted in NaN for 0 ^ -1, which should IMO still result in a division-by-zero error. That is because you passed the sign of the exponent to check_zero_div(), while I think it should pass 1, as N ^ -I is 1 / (N^I). Agree?
I can't find everything back. Somewhere you claimed the exception may be used to fill in the right result. I doubt that is true. From the exception you do not know whether the extended float that would result is -Inf, +Inf or NaN. Using undefined for 0/0 you know you must use NaN, but in general you still don't know whether you have positive or negative infinity in the case of division by zero, so it still is not enough. Note that exceptions typically do not tell you the expression involved. If you want extended floats you must set the flags.
Yes, your patch looks good to me. 0^-1 should still generate a division-by-zero error as currently done:
?- X is 0** -1.
X = 1.0Inf.
?- X is -0.0** -1.
X = -1.0Inf.
?- X is 0.0** -1.
X = 1.0Inf.
Since the check_zero_div in question is specifically for the 0 case, only a positive infinity can result.
When the scope of the catch predicate contains the expression, you can do useful things, e.g.,
catch(R is Exp, error(evaluation_error(Err), _), fix_result(Err, R is Exp))
I remember doing something like this in pre-IEEE days to implement explicit rounding which could potentially overflow.
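For instance, a hypothetical fix_result/2 clause for the zero_divisor case might look like this (the predicate name and clause shape are illustrative, not an actual library API):
fix_result(zero_divisor, R is X/_Y) :-
    (   X > 0 -> R is inf          % positive numerator: +Inf
    ;   X < 0 -> R is -inf         % negative numerator: -Inf
    ;   R is nan                   % 0/0: NaN
    ).
Note that this only works because the handler has access to the expression (here the numerator X), echoing the point made above that the exception alone does not identify the right extended value.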
Another cry in the wilderness? Excerpt from the end of his discussion on numeric comparison, Richard O'Keefe, 2015 (http://www.cs.otago.ac.nz/staffpriv/ok/pllib.htm):
It is not possible to write a sorting predicate that correctly sorts mixed integer/float lists in ISO Prolog as it stands.
The change required has the effect of producing correct answers in more cases. Existing systems already have to be incompatible with the ISO Prolog standard's errors noted above if they are to give sensible results, so we should not let backwards compatibility with the standard bind us here. Mistakes really should not be forever.
(Emphasis from the original.)
Just for the record, no action required.
True. We do not live in the pre-IEEE era though, and you can still "fix" the result (be it that you might need an extra test). ISO demands zero_divisor; 0/0 divides by zero, and raising zero_divisor leads to a better error message. The only reason for undefined I can think of is that in all such cases the IEEE float result is NaN. Even that is not true for more complex expressions:
?- A is max(0 / 0, 1.0).
ERROR: Arithmetic: evaluation error: `zero_divisor'
Whereas, using float_undefined = nan, the result is 1.0.
That is true; having a separate error result just makes it a bit easier.
The other reason is that 0/0 (and equivalents) is actually not mathematically defined in the relevant algebraic field of interest (Z*, Q*, R*). That's why IEEE arithmetic defines it this way for floats. When it comes to errors it's not so significant because they effectively abort the computation; it's left to the programmer to pick up the pieces.
True, because the max of NaN and any other number should be that number; see the Eclipse proposal for floats. I don't see why that shouldn't be extended to other internal numeric types, independent of how NaN is implemented.
According to the agreed compromise (with @jan), correct comparisons are confined to the new arithmetic functions cmpr, maxr, and minr. Standard arithmetic comparison has not been changed and, by extension, neither has term comparison. The intent is that unless you use the new functions, nothing changes.
Since they're arithmetic functions, a user would have to write a predicate for use with predsort/3; that shouldn't be difficult (see the sketch below).
So I think the only issue you might have is if there's a conflict between the chosen function names and any predicates of arity 3 defined by your system.
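For example, a minimal sketch of such a predicate, assuming cmpr evaluates to -1, 0, or 1 (num_compare/3 is a hypothetical name):
num_compare(Order, X, Y) :-
    C is cmpr(X, Y),
    (   C < 0 -> Order = (<)
    ;   C > 0 -> Order = (>)
    ;   Order = (=)
    ).

?- predsort(num_compare, [1, 2r3, 1.0, 0.5], Sorted).
Note that predsort/3 drops elements that compare (=), so numerically equal float/rational pairs (like 1 and 1.0 here) collapse to a single element.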
We could have compare/3 do some of the job. Unlike ISO, which sorts all floats before all integers, SWI-Prolog orders numbers by value based on float comparison and puts the float first in case of a tie. Note that the most relevant property of compare/3 is to define a complete ordering of terms that only considers terms equal if they satisfy ==/2, i.e., are exactly the same. So, we could change this to perform the rational-number-based comparison and then (still) put the float first in case of a tie, i.e., 0.5 @< 1r2. That provides "correct" numerically ordered results from sort/2, although you need another pass to remove numerically equivalent pairs.
Yes, the semantics of (quiet) NaNs in min/max seems to be split into two camps: treat NaNs as missing data or propagate them.
IEEE specifies missing data (IEEE 754-2008 revision - Wikipedia):
In order to support operations such as windowing in which a NaN input should be quietly replaced with one of the end points, min and max are defined to select a number, x, in preference to a quiet NaN:
min(x, qNaN) = min(qNaN, x) = x
max(x, qNaN) = max(qNaN, x) = x
These functions are called minNum and maxNum to indicate their preference for a number over a quiet NaN.
(Signalling NaNs complicate the issue, so the latest revision clarifies this further.)
C and Rust are examples that are in the "missing data" camp. I think Java is in the "propagate" camp.
I haven't seen any compelling arguments one way or the other; missing data seems to work well for my purposes.
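Illustratively, the two camps differ on a query like the following (not a claim about current SWIP output; behaviour depends on implementation and flags):
?- X is max(nan, 1.0).
% "missing data" camp: X = 1.0
% "propagate" camp:    X = nan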
No objection, but wasn't this a major argument against making correct comparison the default behaviour? Am I missing something?
Is that any different than the status quo? (Some existing comparisons actually are numerically correct.)
My objection is about numerical comparison, e.g., =:=/2, </2, etc. compare/3 is about standard order. Different terms can never compare equal here, so if we have a float and a rational we have to put one before the other. Currently it simply puts the float first, but I see no objection to first seeing whether rational comparison can order them before placing the float first.
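For example, 1 and 1.0 are numerically equal, yet the standard order must still order them (currently float first):
?- 1 =:= 1.0.
true.
?- compare(Order, 1, 1.0).
Order = (>).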
Yes. Now, if you want your correctly ordered list you'll have to do quite a bit of work. You can use sort and then walk over the list, removing either the first or the last of a pair if cmpr says they are equal (depending on whether you want to keep the floats or the rationals); a sketch follows. Note there are never three equal numbers, as we have only two types that may be numerically equal.
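A minimal sketch of such a pass, assuming a list of numbers already sorted in standard order and cmpr as described (dedup_numeric/2 is a hypothetical name; this version keeps the first of each equal pair):
dedup_numeric([], []).
dedup_numeric([X], [X]).
dedup_numeric([X,Y|T], Out) :-
    (   0 =:= cmpr(X, Y)              % numerically equal float/rational pair
    ->  dedup_numeric([X|T], Out)     % drop Y, keep X (swap to keep Y instead)
    ;   Out = [X|Rest],
        dedup_numeric([Y|T], Rest)
    ).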
My preference would go to propagating. For one, it is consistent with the default exception-based semantics (which are your signalling NaNs). Second, a non-NaN answer means a sensible computation took place. Otherwise you can get an answer, but you cannot trust it. A Prolog flag ...
OK, so there's no objection if the numerical comparison used by compare/3, and by implication sort/2, is correct. This does mean that the result of a sort will be (subtly) different going forward if this change is effected, i.e., you have changed the "Standard Order of Terms" which currently states:
- Numbers are compared by value. Mixed integer/float are compared as floats. If the comparison is equal, the float is considered the smaller value. If the Prolog flag iso is defined, all floating point numbers precede all integers.
(In any case that should probably say "Mixed rational/float".)
And that does mean that the numerical comparison for sorting numbers as terms is potentially different from the arithmetic compare predicates, which, I think, is a bit unfortunate. Now for the vast majority of use cases (integers less than 53 bits, which could be extended to 64 bits), there is no difference, i.e., converting integers to floats is perfectly correct. I don't wish to reopen this can of worms, but it would be a whole lot simpler and more user friendly (IMO) if correct comparison was extended to the domains of big integers and rational fractions. It's just hard for me to imagine a credible program that depends on incorrect comparison under these circumstances.
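A hypothetical illustration of where the subtle difference would surface (assuming IEEE doubles with round-to-nearest-even):
?- X is 2^53+3, Y is 2.0^53+4, sort([X,Y], S).
% float-based comparison: X converts to 9007199254740996.0, a tie, so the float sorts first:
%   S = [9007199254740996.0, 9007199254740995]
% exact comparison: X < Y, so:
%   S = [9007199254740995, 9007199254740996.0]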
And my preference is "missing value" as defined by IEEE and C, the underlying implementation language of SWIP (also Eclipse?). I confess that I don't completely see the rationale of IEEE's definition of max/min with respect to NaNs, but it's persisted for 35 years and nobody's come up with a compelling argument against it, although there's obviously less than full agreement. But it's not hard to find discussions on the Web justifying/rationalizing the "missing value" interpretation.
If a user decides to use max/min, however they're defined, it's up to them to decide what a "sensible answer" is. From a practical standpoint, I don't see a problem with using min/max to eliminate non-sensical answers in a computation, i.e., given two values I want to prefer an actual number rather than something that is not a number.
I don't think there is any obvious right answer here; the only wrong answer IMO is another Prolog flag.
I meant Java. I got it from a table in this doc https://grouper.ieee.org/groups/msc/ANSI_IEEE-Std-754-2019/background/minNum_maxNum_Removal_Demotion_v3.pdf.
Even though they're represented by an IEEE floating point value, NaNs are not really numbers, so they violate most arithmetic (and logical) properties as you described. I believe ROK calls them "unordered" numbers, and IEEE defines their behaviour, which is usually honoured in programming languages supporting the standard.
As of the 2019 version of IEEE 754, the old single definition of minNum/maxNum has been replaced by two new versions, a "propagate" version and a "missing value" version, so I assume languages are free to implement either or both. (Not sure if these are required or just recommended.) I did see a C discussion indicating that they were just going to change the documentation for fmin and fmax to say they implemented the "missing value" functions. That's undoubtedly a lot easier for them than adding new functions.
The new propagate version seems to be minimum and maximum.
The new missing value version seems to be minimumNumber and maximumNumber.
For SWI-Prolog, from https://www.swi-prolog.org/pldoc/doc_for?object=float_class/2:
nan
Float is "Not a number". See nan/0. May be produced if the Prolog flag float_undefined is set to nan. Although IEEE 754 allows NaN to carry a payload and have a sign, SWI-Prolog has only a single NaN value. Note that while two NaN terms compare equal in the standard order of terms (==/2, etc.), they compare non-equal for arithmetic (=:=/2, etc.).
so only one NaN value is supported; the literal value is 1.5NaN. All C-level NaNs are mapped to this single Prolog value.
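For example, tying this back to the 0/0 patch discussed above (output shown schematically; the NaN may print as nan or 1.5NaN depending on version):
?- set_prolog_flag(float_undefined, nan),
   X is 0/0,
   float_class(X, C).
X = nan,
C = nan.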
I believe by design, there is no support for accessing the bit-level float representation. If you want to do that, you have to resort to some kind of C extension.