Rounding transcendentals broken on macOS (Intel)

SWI-Prolog version 8.4.1 on macOS 10.13

One example (of many possible):

?- Z is sin(pi/2).
Z = 1.0.

?- X=pi/2, Zl is roundtoward(sin(X),to_negative),Zh is roundtoward(sin(X),to_positive), DZ is Zh-Zl.
X = pi/2,
Zl = 0.9999999999999999,
Zh = 1.000000000251173,
DZ = 2.5117319335521415e-10.

Clearly rounding to_positive is off by many decimal digits. Same last query on SWISH (threaded, 64 bits, version 8.5.8-150-g3e63163b9):

DZ = 0.0,
X = pi/2,
Zh = Zl, Zl = 1.0

I conclude the problem is not in SWIP code, but with the linked C math library. Don't know enough to diagnose this further or what remedies might exist.

Most likely. On the M1 (arm64) I get this. Still a little conservative, as sin() is pretty stable near π/2. I assume most C libraries use the FPU to make the computation. I do not know enough about the details, e.g., who does/what controls the rounding, etc.

?- X=pi/2, Zl is roundtoward(sin(X),to_negative),Zh is roundtoward(sin(X),to_positive), DZ is Zh-Zl.
X = pi/2,
Zl = 0.9999999999999999,
Zh = 1.0,
DZ = 1.1102230246251565e-16.

I'm in the same boat.

The arm64 result would be acceptable, to the extent that any math library guarantees any precision. At least Zl =< Zh and the difference is on the order of a ulp. On the Intel side, it's hard for me to fathom how a rounding mode could cause such a big error - maybe a calculation to generate an index to a table of values?

One would assume (perhaps incorrectly) that the Xcode libraries for the two targets are built from the same source, which might indicate it's a compiler (optimization?) problem, but I'm grasping at straws.

My conclusion is that the problem is unlikely to be either the hardware or the SWIP software, which means either a bug in the specific library source or an issue with the toolset used to build it. It would be nice to be able to diagnose the problem a little further, but in the meantime I'll just avoid using a rounding mode other than the default with transcendental functions.

I must admit I "assumed" the transcendentals were done in software, but after digging a bit, the answer seems to be "it depends" - see How does C compute sin() and other math functions? - Stack Overflow.

If you follow one of the links you get to the GNU C library source (I think?) for ia64 sin/cos

https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/ia64/fpu/s_cos.S;h=cc79aca17edc8baf2ca78c96f053c51f59edcc26;hb=HEAD

Don't quite know what to make of it, but it definitely doesn't look like hardware; maybe FPU emulation. So still software or toolset IMHO.

I also see problems under Windows, e.g., with the following unit test (most recent "daily build" for Windows, 64 bit). Note that the size of a long int is 4 in MinGW, so lround is invoked in pl-arith.c, not llround.

This is the unit-test from swipl:core

?- X is nexttoward(0.5, -10), N is round(X).
X = 0.49999999999999994,
N = 1.

This causes the unit test in swipl:core to fail under Windows. On Ubuntu, N = 0, and the test passes.

If we move on one step, the result is correct:

?- X is nexttoward(nexttoward(0.5, -10), -10), N is round(X).
X = 0.4999999999999999,
N = 0.

In my naive understanding of floating point arithmetic, 0.5 is just somewhere around 0.5, so stepping towards -10 may bring it to a number that is still in the middle between 0 and 1. In the docs for lround, they say that lround doesn't care about the rounding mode anyway.

https://man7.org/linux/man-pages/man3/lround.3.html

"These functions round their argument to the nearest integer value, rounding halfway cases away from zero, regardless of the current rounding direction (see fenv(3))."

I see that it says "rounding halfway cases away from zero", but the stronger statement is that it doesn't care about the rounding direction.

I see two options:

  • accept that floating point math is a C thing, and remove the unit test (since it does not test a Prolog feature, but a C feature)
  • fix the Prolog code in some way that it does not rely 100% on lround.

AFAIK doubles do precisely express 0.5, as it is 1 * 2^{-1}. This indicates a bug in lround() on this platform and as a result a bug in SWI-Prolog on this platform. I already reported that in Another floating point casualty: round/integer arithmetic functions - #9 by jan.

Ideally MinGW should fix this. It was reported long ago together with a patch :frowning: Alternatively we leave the test failing and possibly add a bug note to the round function. Finally we can provide our own correct lround for systems where it is broken.

Dubious things happen. I just pasted some fprintfs into pl-arith.c

#if SIZEOF_LONG == 8
        r->value.i = lround(n1->value.f);
#else
        double d = nexttoward(0.5, -10) ;
        fprintf(stderr, "d - f = %lf\n", d - n1->value.f) ;
        fprintf(stderr, "d == f: %d\n", d == n1->value.f) ;
        fprintf(stderr, "d  = %.20lf\n", d) ;
        fprintf(stderr, "f  = %.20lf\n", n1->value.f) ;
        fprintf(stderr, "rd  = %lld\n", llround(d)) ;
        fprintf(stderr, "rf  = %lld\n", llround(n1->value.f)) ;

        r->value.i = llround(n1->value.f);
#endif

Now, running swipl with X is nexttoward(0.5, -10), N is round(X)., I get:

$ swipl
Welcome to SWI-Prolog (threaded, 64 bits, version 8.5.9-6-g9dc763bc2-DIRTY)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.

For online help and background, visit https://www.swi-prolog.org
For built-in help, use ?- help(Topic). or ?- apropos(Word).

X is nexttoward(0.5, -10), N is round(X).
X = 0.49999999999999994,
N = 1.


d - f = 0,000000
d == f: 1
d  = 0,49999999999999994449
f  = 0,49999999999999994449
rd  = 0
rf  = 1

So, d == f, but llround returns different ints for d and f.

Any ideas?

Call in the experts? I'm not one. One of the possible reasons is that AFAIK the floating point registers are wider than 64 bits (80?). As a result, computation may yield different results after storing and retrieving values to/from memory (which stores them as 64 bits), or when the compiler decides to keep the value in a register. Floats are too hard for ordinary people :frowning: That is, if you want to understand their corner cases and limits. As long as you use them to reason about physical things, stay far away from the smallest and largest floats and do not expect the least significant digits to be correct, they are just fine.

I'd leave this alone if I were you. The best way to fix this is probably to look up the MinGW bug report mentioned in my post and provide our own llround() based on that. Before you do so, you'll find quite a couple more bugs in the IEEE754 tests by @ridgeworks. It appears that the math functions of MinGW and/or the math library that comes with it (or does it use the one from Windows?) get a lot of details wrong :frowning:

edit: looked up both the MinGW and glibc (GNU libc) sources. MinGW uses something similar to what I did before moving to llround(). glibc does some bit fiddling on the double value. glibc code is LGPL though, so we cannot easily use that.

If you have suitable float parts, you can implement round/1 evaluable
from float parts. Unfortunately SWI-Prolog returns the float parts with
the mantissa as a float itself:

/* SWI-Prolog 8.5.8 */
?- float_parts(0.4, M, _, E).
M = 0.8,
E = -1.

But you can also define float parts so that the mantissa is returned
as an integer. Here is what such a float parts gives:

/* Jekejeke Prolog 1.5.1 */
?- sys_float_mantissa(0.4, M).
M = 7205759403792794.
?- sys_float_exponent(0.4, E).
E = -54.

Now you can define your own round/1 evaluable function, inspired
by how Math.round() from Java does it. This shows only the sunshine
case for round/1, when E and M are in a suitable range:

round2(X,Y) :-
    sys_float_mantissa(X, M),
    sys_float_exponent(X, E),
    Y is ((M >> (-E-1)) + 1) >> 1.

?- round2(23.4, X).
X = 23.
?- round2(24.6, X).
X = 25.

And for the nextto numbers:

?- 0.49999999999999994 == 0.4999999999999999.
fail.
?- round2(0.49999999999999994, X).
X = 0.
?- round2(0.4999999999999999, X).
X = 0.

Is probably what glibc also uses. I'm not too fond of exposing representation details of floats. Even doing this in C is not easy, as only some systems provide a header file (ieee754.h) that allows you to fetch the sign, exponent and mantissa of a double as integers.

Hopefully the patch by @mgondan1 holds. Otherwise we probably have to go this route or accept some systems got it wrong :frowning:

I see. Playing around with -O also changes the result.

$ cat round.c
#include <stdio.h>
#include <math.h>

int main()
{ double d = 0.5 ;
  double d_ = nexttoward(d, -10.0) ;
  long long llrd = llround(d_) ;
  printf("llround(d_) = %lld\n", llrd) ;
  return 0 ;
}

$ gcc -Wall -O0 round.c
$ ./a.exe
llround(d_) = 1

$ gcc -Wall -O1 round.c
$ ./a.exe
llround(d_) = 0

The question is: Why is such an unfriendly thing tested in swipl:core :slight_smile:

:slight_smile: Well, it mostly hints at MinGW (or, more precisely its C runtime library) not being very standard compliant. If I try the same on Ubuntu using gcc and glibc I get a consistent 0 as answer.

SWI-Prolog makes a promise on how it rounds floats to integers. We should either make sure we keep to this promise or document it as a platform specific limitation.

For most applications it doesn't matter how rounding floats very close to x.5 happens. For some, it does. If you want to ignore such tests, just place

:- if(\+ current_prolog_flag(windows,true)).
...
:- endif.

around them, and we'll add a remark to the arithmetic section that some promises wrt floats are not satisfied on Windows (I think all are rounding issues).

Agreed; this is the key issue. Key points:

  1. If round(nexttoward(0.5, -inf)) equals 1, it's a bug. In fact for any finite float F=N+0.5 where N is a non-negative integer, round(nexttoward(F, -inf)) < F. If that's not true for some platform it should be documented.

  2. Compiler optimization settings should never change the value of a result. Sounds like this may be the issue with MinGW, not a bug in the math library code.

  3. This problem is independent of IEEE rounding modes, i.e., the rounding mode as set by the user should not affect the result of llround (as documented). (Rounding modes should only affect the direction of rounding of a high precision value to a lower precision value, e.g., an 80 bit floating point value in a hardware register to a 64 bit double.)

Also note that this thread is about using IEEE rounding modes with transcendental functions on macOS. And the observed result is not off by a couple of ulps but by several (6) decimal places.

But the resolution may be similar in that it's a platform specific "broken promise" rather than something that can be fixed in SWIP. (It may also be caused by a defective optimization by a compiler; that has yet to be determined.)

Elegant! One should probably only use this for values that are within the 64 bit int range. That is fine because above the integer range the resolution of floats is so low that it cannot represent the Ā± 0.5 anyway.

I do agree with your points. I doubt C compilers make this promise about optimization. I have started a section on how we deal with floating point arithmetic. That can be extended with concrete information for specific platforms.

Anyone who cares about these things should help extending the test suite and/or provide patches to improve the implementation. Iā€™m happy to coordinate handling PRs.

So here's a somewhat extreme solution, but I think it covers a lot of the bases. "crlibm" is:

a portable, proven, correctly rounded, and efficient mathematical library (libm) for double precision. … The ultimate goal of the crlibm project is to push towards the standardization of correctly-rounded elementary functions.

(reference: https://hal-ens-lyon.archives-ouvertes.fr/ensl-01529804/file/crlibm.pdf contains way more than you want to know but first chapter or two is useful background).

Key attributes:

  1. correctly rounded - i.e., "as perfect as can be"
  2. portable - crlibm is written in C and will be compiled by any compiler fulfilling basic requirements of the ISO/IEC 9899:1999 (C99) standard.
  3. proven - crlibm intends to provide a comprehensive proof of the theoretical possibility of correct rounding, the algorithms used, and the implementation, assuming C99 and IEEE-754 compliance.
  4. efficient - performance and resource usage of crlibm should be comparable to existing libm implementations, both in average and in the worst case.

A corollary is that given C99 and IEEE-754 compliance, the results are the same on all platforms; a useful property for composing test cases. And the need for floating point "disclaimers" is minimized, if required at all.

crlibm is an INRIA sponsored project and has been around for over 15 years. I found additional language bindings for Python (among the top 5% of packages on PyPI), Julia, and Rust with a quick search. The main argument against using it as a general replacement for libm is that it only supports doubles, but that's all that's required for our purposes.

crlibm is not a replacement library for libm; rather it provides a set of four functions (one for each rounding direction) for each double elementary function (e.g., exp, sin, etc.) - see https://github.com/taschini/crlibm/blob/eb3063791aa75bc9705b49283bf14250465220a7/crlibm.h. So one attractive possibility is to make crlibm an optional library in SWIP, analogous to the GMP library for rationals. To take advantage of its capabilities would require changes to the various "ar_" elementary functions in pl-arith.c, but this could largely be done with a macro to replace the existing calls to libm. I'm not a C programmer, but I assume it would look something like:

#ifdef CRLIBM
#define CALL_ELEM_FUNCTION1(func, n, r) \
	do \
	{ int rounding = fegetround(); \
	  if ( rounding == FE_TONEAREST ) \
	  { r->value.f = func##_rn(n->value.f); \
	  } else \
	  { fesetround(FE_TONEAREST);	/* crlibm expects round-to-nearest */ \
	    switch( rounding ) \
	    { case FE_UPWARD:     r->value.f = func##_ru(n->value.f); break; \
	      case FE_DOWNWARD:   r->value.f = func##_rd(n->value.f); break; \
	      case FE_TOWARDZERO: r->value.f = func##_rz(n->value.f); break; \
	    } \
	    fesetround(rounding); \
	  } \
	} while(0)
#else
#define CALL_ELEM_FUNCTION1(func, n, r) \
	do { r->value.f = func(n->value.f); } while(0)
#endif

If the flag CRLIBM is defined, this just calls the version of the function corresponding to the current rounding mode, after setting the rounding mode to FE_TONEAREST (expected by crlibm) and afterwards restoring it. If the flag is not defined, the corresponding function in libm is used. I think this would handle most of the elementary function calls, but there may be a few outliers, e.g., ar_pow. In addition, crlibm does not provide direct implementations of atan2, erf, erfc, and lgamma, so these need further investigation. (Worst case, these just revert to libm with a disclaimer.)

The big advantage of this is to isolate SWIP from the differences (including bugs such as the one that initiated this thread) in the libm elementary functions between the various platforms, which should improve reliability and simplify testing. In addition, the results should be directly comparable with other language systems using crlibm. Most users are unlikely to notice any changes, but the affected function results are guaranteed precise and correctly follow the IEEE rounding mode semantics, contrary to my experience with existing math libraries. I've done some superficial testing using the Python implementation and crlibm seems to fulfill its promise.

One example using SWISH (8.5.9):

?- Xl is roundtoward(exp(log(2)),to_negative), Xh is roundtoward(exp(log(2)),to_negative), W is Xh-Xl.
W = 0.0,
Xh = Xl, Xl = 2.0

On macOS (8.4.1):

?- Xl is roundtoward(exp(log(2)),to_negative), Xh is roundtoward(exp(log(2)),to_negative), W is Xh-Xl.
Xl = Xh, Xh = 1.9999999999999998,
W = 0.0.

Note that rounding up and rounding down the same expression yields an identical answer, which should never be the case for correct rounding. With Python crlibm:

>>> format(crlibm.exp_rd(crlibm.log_rd(2)),'.20g')
'1.999999999999999778'
>>> format(crlibm.exp_ru(crlibm.log_ru(2)),'.20g')
'2.0000000000000004441'
>>> format(crlibm.exp_ru(crlibm.log_ru(2))-crlibm.exp_rd(crlibm.log_rd(2)),'.20g')
'6.6613381477509392425e-16'

Outstanding issues:

  1. I assume that given the source here https://github.com/taschini/crlibm, supporting crlibm in SWIP in much the same way as is currently done for GMP is not a problem, i.e. "just" incremental work; is this a valid assumption?

  2. Licensing: Like GMP, crlibm has an LGPL license, so I assume it would be positioned the same way, i.e., as a build option (as reflected in the draft CALL_ELEM_FUNCTION1 macro above).

  3. Environment flag: any need? (It's unlikely the average user will see any difference.)

  4. Who?: I don't feel comfortable tinkering with the SWIP build process, but I can help with PRs for the detailed changes to pl-arith.c and additional test cases as required.

Sorry, ignore that example in the last post - it's not at all right.

Thanks for sorting this out! If it is true that for a function f the library provides 4 functions named consistently we can indeed solve this using some macros that should not impact pl-arith.c too much. This would provide a portable solution to consistent rounding.

Otherwise I'm less happy. Only MacPorts (and probably Homebrew) provide binaries. Both Ubuntu and Fedora have no packages. Possibly because glibc seems to get things right and (therefore) the need for an alternative math library is small for Linux? I'm not too keen on LGPL either. I'd like to reduce dependency on LGPL components rather than increase it.

Provided we can avoid complicating the source of pl-arith.c too much, I'm happy to support crlibm. As long as we keep crlibm away from the SWI-Prolog source it should be just a test for the existence of a library and some #if HAVE_* statements.

Hmmm. Seems the current source is at GitHub - taschini/crlibm: A mirror of the CRLibm project from INRIA Forge, which hasn't been touched since 2016. The MacPorts version says it has no maintainer. It points at Correctly Rounded mathematical library, which claims it is "history" and that the project has moved to MetaLibm (MetaLibm: code generators for the libm and beyond).

Possibly there is something we can base ourselves on? Having the core (optionally) depend on an unmaintained library is not very attractive. If the changes are small and clean I'm still OK supporting it as an option.

I only have a fuzzy understanding of what the implications are. Presumably you would have to build a library from source, but you need this to be separated from swipl-devel for licensing reasons.

Whatever SWISH is running on doesn't get it consistently right:

X=0.2,Zl is roundtoward(exp(log(X)),to_negative), Zh is roundtoward(exp(log(X)),to_positive), W is Zh-Zl.
W = 2.7755575615628914e-17,
X = Zh, Zh = 0.2,
Zl = 0.19999999999999998

is OK, but not for:

X=2.0,Zl is roundtoward(exp(log(X)),to_negative), Zh is roundtoward(exp(log(X)),to_positive), W is Zh-Zl.
W = 0.0,
X = Zh, Zh = Zl, Zl = 2.0

I don't have any reason to believe that any of the Linux versions is any better or worse than the others. Nobody (other than crlibm and possibly GNU libm) makes any guarantees.

Understood. My naive assumption was that it has the same problem as GMP and therefore could be treated the same way.

And like GMP, I'm assuming any changes for crlibm would be isolated and conditional to such a flag. I see this as a pre-requisite.

Also understood. MetaLibm seems to be the (somewhat) active successor, but it seems focused on code generation rather than a library. Not sure that's a useful path for our purposes.

And I'm happy to run with it as an experiment. To get started I only need a Mac version with the library (and flag) defined to get my hands dirty. We can make a final decision when the ramifications are better understood. And I'll keep looking to see if I can find a cleaner alternative.

Your call on how you want to proceed.

The LGPL license merely requires that crlibm is compiled into a shared library/DLL to prevent it from affecting the SWI-Prolog BSD license. Still, anything distributed that contains an LGPL component must ensure the end user can replace the LGPL component. This is not always possible. Think of embedded devices, game consoles, etc.

The fact that it is lacking from the major Linux distros is an indication of lack of interest and commitment. Same for MacPorts claiming there is no maintainer.

Ubuntu 18.04 on Intel E5-2650, lib{c,m} 2.27. Same test on my dev machine (Ubuntu 21.10, AMD 3950X, lib{c,m} 2.34) gives:

?- X=2.0,Zl is roundtoward(exp(log(X)),to_negative), Zh is roundtoward(exp(log(X)),to_positive), W is Zh-Zl.
X = 2.0,
Zl = 1.9999999999999998,
Zh = 2.0000000000000004,
W = 6.661338147750939e-16.

AFAIK libm is part of glibc, no?

If you want to go ahead with this, I propose this route:

  • Build and install the library on your system.
  • Add a rule to the CMake config to detect the library. There are plenty of examples on how to find libraries.
  • Ensure to redefine sin(), etc. somewhere at the top of pl-arith.c. If it is only a page or so, do so inline. If it is longer, add a file pl-crlibm.h that you conditionally include at the top of pl-arith.c.

If all that works we have to consider whether we want the default binaries and (Linux) packages to include this, or we merely keep it as a build option for those who actually require this stuff. That mostly depends on the complications this poses to these processes.