Does SWI-Prolog use StrictMath or Math?

I find that SWI-Prolog 8.5.4 gives on Windows 10:

?- X is cos(2.3).
X = -0.6662760212798241.

Whereas TauProlog 0.3.2 (beta) gives:

?- X is cos(2.3).
X = -0.666276021279824

Is there maybe a flag where the strictness can be queried or controlled?

write/1 uses the dtoa.c library to convert the C double into the shortest sequence of digits that produces the same binary double representation when read back. The printed value is correct; TauProlog's result is simply a different double:

103 ?- X is cos(2.3), X == -0.6662760212798241.
X = -0.6662760212798241.

104 ?- X is cos(2.3), X == -0.6662760212798240.
false.

You get a defined number of digits using format/2,3.

I don’t see any issue with the write function; it’s just that cos() returns a different value.
You can check the bit pattern of the float:

/* Math, -0.6662760212798241 */
cos(2.3) = 0xbfe5522217302fe1

Versus:

/* StrictMath, -0.666276021279824 */
cos(2.3) = 0xbfe5522217302fe0
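For anyone who wants to reproduce these bit patterns, Java’s Double.doubleToLongBits exposes them directly. A minimal sketch (the hex values in the comments are the ones quoted above; shortest round-trip printing guarantees that parsing each decimal string yields back its original double):

```java
public class CosBits {
    public static void main(String[] args) {
        long swi = Double.doubleToLongBits(-0.6662760212798241); // SWI-Prolog result
        long tau = Double.doubleToLongBits(-0.666276021279824);  // Tau Prolog result
        System.out.printf("%x%n", swi); // bfe5522217302fe1
        System.out.printf("%x%n", tau); // bfe5522217302fe0
        System.out.println(swi - tau);  // 1, i.e. adjacent doubles
    }
}
```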

I can produce the two different return values with Java JDK 16 on Windows 10 by using either
the Java class StrictMath or Math. Now I have found that such a discrepancy also exists
across Prolog systems. Math is characterized as:

Unlike some of the numeric methods of class StrictMath , all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required.
https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/lang/Math.html
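In Java the two can be compared side by side. A small sketch (whether Math.cos and StrictMath.cos actually differ depends on JDK version and platform; both are specified to stay within 1 ulp of the exact result, so they can differ from each other by at most a couple of ulps):

```java
public class MathVsStrict {
    public static void main(String[] args) {
        double m = Math.cos(2.3);       // may be replaced by a JIT intrinsic
        double s = StrictMath.cos(2.3); // fdlibm port: bit-for-bit reproducible
        System.out.println(m);
        System.out.println(s);
        System.out.println(m == s);     // platform/JDK dependent
    }
}
```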

There are more Prolog systems that side with Math rather than StrictMath, for
example Scryer Prolog and ECLiPSe Prolog. But some side with StrictMath, like
Tau Prolog, Dogelog and GNU Prolog.

I see. Java StrictMath seems to use some C library that trades performance for strict reproducibility on any platform. If I understand this correctly, the results are not necessarily better, but the rounding is platform independent when using StrictMath? Possibly JavaScript uses this too, considering the results for Tau and Dodgelog? GNU Prolog is rather weird though, since GNU Prolog is C based.

I guess we could use the same library (license permitting) and possibly have a Prolog flag to switch between native and this library. I have little clue how useful this is. If someone cares and comes with a convincing argument why it is worth the additional code and complexity I’m happy to handle a PR.

Especially if GMP is compatible with StrictMath a more advanced solution would be to support GMP floats and allow for higher precision. And then there are several other initiatives to improve/change floats :slight_smile: Life is too short to understand floats :frowning:

I don’t know what Dodgelog should be, I only know about Dogelog.
But on 64-bit Intel, the Math cos() is simply these op-codes:

0xD9 0xFF    /* FCOS */
coder64 edition | X86 Opcode and Instruction Reference 1.12

That is what JDK 16 in Math mode seems to use. According to
JDK-8143353 the performance gain is ~4.25x with both sin and cos.

But I have not yet done any performance measurements. Also, I have not
verified first-hand that this opcode really gives the different value.
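A naive micro-benchmark could look like the following sketch (not JMH-quality: JIT warm-up, on-stack replacement and dead-code elimination make such timings merely indicative):

```java
public class CosBench {
    // Sum cos over n inputs; returning the sum keeps the loop from being eliminated.
    static double burn(boolean strict, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            double x = i * 1e-3;
            acc += strict ? StrictMath.cos(x) : Math.cos(x);
        }
        return acc;
    }

    public static void main(String[] args) {
        burn(false, 1_000_000); // warm-up
        burn(true, 1_000_000);
        long t0 = System.nanoTime();
        double a = burn(false, 1_000_000);
        long t1 = System.nanoTime();
        double b = burn(true, 1_000_000);
        long t2 = System.nanoTime();
        System.out.printf("Math: %d ms, StrictMath: %d ms (sums %f / %f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, a, b);
    }
}
```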

That’s my understanding as well. But according to https://www.mathsisfun.com/calculator-precision.html,

cos(2.3) = -0.66627602127982419331788057116602
                              ^

neither is correct to the nearest “ulp” (unit of least precision) expressed in printed decimal form. On the other hand, all 3 values fit within 2 ulps, so it’s not clear it makes much difference unless you’re comparing results at the bit level.
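The ulp arithmetic is easy to check mechanically; for instance in Java, using the two values from the systems above (a sketch relying on Math.ulp):

```java
public class UlpCheck {
    public static void main(String[] args) {
        double a = -0.6662760212798241; // SWI-Prolog / Math-style value
        double b = -0.666276021279824;  // Tau Prolog / StrictMath-style value
        System.out.println(b - a);       // one spacing: 1.1102230246251565E-16
        System.out.println(Math.ulp(a)); // the ulp at this magnitude, same value
        System.out.println((b - a) == Math.ulp(a)); // true: adjacent doubles
    }
}
```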

From what I’ve seen, C math libraries seem to be quite bad at defining the error properties of their functions. The best I’ve seen is the GNU math library:
Errors in Math Functions (The GNU C Library); note that, in general, error values are architecture dependent. So if it’s worth looking at support for other math libraries, maybe GNU math is a decent candidate. I have no idea how it stacks up on other criteria.

Bottom line: when it comes to using floats, “caveat emptor”, but that should be old news to most. (If not, read Representation and printing of floating point numbers .)


This would give yet another result:

/* SWI-Prolog */
?- X = -0.6662760212798241933, Y is X-cos(2.3).
X = -0.6662760212798242,
Y = -1.1102230246251565e-16.

Edit 21.03.2022:
The reason that this gives another result can be found in the different ways of approximating
2.3 itself. If you have more precision for 2.3, you also shift the result. I get:

?- X is mp(decimal(2.3), 50).
X = 0d2.2999999999999998223643160599749535322189331054688.
?- X is mp(23/10, 50).
X = 0d2.3.

And therefore:

?- X is mp(cos(decimal(2.3)), 50).
X = -0d0.66627602127982406085402518851282458075545863946058.
?- X is mp(cos(23/10), 50).
X = -0d0.66627602127982419331788057116601723016327537100379.

This was computed with Java BigDecimal and some Prolog code, only simple
algorithms though, so the one or two digits at the end of a result might not be
correct. The rest seems to be mostly good…
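The same two observations can be reproduced with plain Java BigDecimal. The sketch below first shows the exact binary value behind the literal 2.3 (the BigDecimal(double) constructor exposes it), then evaluates cos with a simple Taylor series at 50 digits; as noted above, the last digit or two of such a simple algorithm may be off:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigCos {
    // Plain Taylor series cos(x) = 1 - x^2/2! + x^4/4! - ...
    // Fine for |x| of a few units; not a production algorithm.
    static BigDecimal cos(BigDecimal x, MathContext mc) {
        BigDecimal x2 = x.multiply(x, mc);
        BigDecimal term = BigDecimal.ONE, sum = BigDecimal.ONE;
        for (int k = 1; k <= 60; k++) {
            term = term.multiply(x2, mc)
                       .divide(BigDecimal.valueOf((2L * k - 1) * (2L * k)), mc)
                       .negate();
            sum = sum.add(term, mc);
        }
        return sum;
    }

    public static void main(String[] args) {
        // The double literal 2.3 really denotes the nearest binary64 value:
        System.out.println(new BigDecimal(2.3));
        MathContext mc = new MathContext(50);
        System.out.println(cos(new BigDecimal(2.3), mc));   // cos of the double
        System.out.println(cos(new BigDecimal("2.3"), mc)); // cos of exact 23/10
    }
}
```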

Things become even more fun when you combine floating point functions, for example sqrt(1-sin(X)**2) … there are ways of computing these that give better “ulp” precision than just naively using that formula.

(I found out about this when I was in charge of a C runtime library, which used new and highly optimized assembler code from IBM Research … on our team was a PhD numerical analyst, who kept on getting side-tracked with issues of “ulp” and I had to constantly remind him: “Yuri: first we need to ship this thing; then we can improve its ulps.”)
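The sqrt(1-sin(X)**2) point can be illustrated in Java: near pi/2 the naive formula loses roughly half of its significant digits to cancellation in 1 - sin², while the mathematically equal |cos(x)| involves no subtraction at all. A sketch:

```java
public class Cancellation {
    public static void main(String[] args) {
        double x = Math.nextDown(Math.PI / 2); // just below pi/2, sin(x) very close to 1
        double naive  = Math.sqrt(1.0 - Math.sin(x) * Math.sin(x)); // 1 - sin^2 cancels badly
        double better = Math.abs(Math.cos(x));                      // no cancellation
        System.out.println(naive);
        System.out.println(better);
        // Away from the danger zone the two agree to within rounding:
        System.out.println(Math.sqrt(1.0 - Math.sin(1.4) * Math.sin(1.4))
                           - Math.abs(Math.cos(1.4)));
    }
}
```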


So when we use the nearest double to 2.3, the exact value of cos lies in between the two,
but it is nearer to -0.6662760212798241. One can use some multi-precision MATLAB
arithmetic, or these queries here where mp/2 is for multi-precision, to verify
that this value is nearer and that cos lies in between:

?- X is mp(cos(2.3)- -0.6662760212798241, 50).
X = 0d4.355007402796814766458012027899645E-17.

?- X is mp(cos(2.3)- -0.666276021279824, 50).
X = -0d6.747222843454750637778304653008558E-17.

In the above I didn’t use the decimal/1 conversion evaluable function anymore; mp/2
does this automatically when it sees a double, i.e. a number that is not a decimal
starting with the special number prefix 0d.

Ah, so it appears that the StrictMath library gave the correct answer to the wrong question, i.e., it couldn’t produce the correct answer to X is cos(2.3) because 2.3 isn’t a representable input value.

Floats are not a very good approximation to reals (cos is a continuous function over reals) except they seem to work well enough most of the time.

Inside Float Arithmetic/Trigonometry, cos(x) is defined as taking a Float for x, and not
something else. I didn’t test something else, like interval arithmetic or arbitrary
precision floats; that’s kind of out of scope. The question was only:

?- X is cos(2.3).
X = -0.6662760212798241.

Versus:

?- X is cos(2.3).
X = -0.666276021279824

But Float Arithmetic/Trigonometry is usually defined in reference to the mathematical
domain of real numbers, which is yet something else. For example, in interval arithmetic
you might get some interval [L..H], but this still doesn’t tell you what Float
Arithmetic/Trigonometry should give. Float Arithmetic/Trigonometry must choose a machine
X' which is close to the mathematical X; it must make a choice from the interval [L..H].
Such choices must first be made for the arguments of a function, from parsing a number
literal in case the problem is given as such, and then for the result of the function,
based on the choices that were made for its arguments.

Edit 22.03.2022:
You can play around with ECLiPSe Prolog to see the difference:

[eclipse 2]: X is cos(2.3__2.3).
X = -0.66627602127982422__-0.666276021279824

[eclipse 3]: X is cos(2.3).
X = -0.6662760212798241

Strangely their interval arithmetic gives a too-large interval, or it’s a fake centered
interval, not some high-quality interval arithmetic. A smaller interval would be
-0.6662760212798241__-0.666276021279824; the lower end-point is wrong,
it’s indeed an end-point farther down:

[eclipse 4]: -0.66627602127982422 == -0.6662760212798241.
No (0.00s cpu)
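For comparison, a crude but guaranteed enclosure can be built in plain Java from the documented 1-ulp error bound of Math.cos, by widening the computed value one step on each side with nextDown/nextUp. This is only an illustration, not real interval arithmetic (no directed rounding is involved):

```java
public class CrudeInterval {
    public static void main(String[] args) {
        double c = Math.cos(2.3);
        // Math.cos is documented to be within 1 ulp of the exact result,
        // so widening by one extra step on each side encloses the true cos(2.3):
        double lo = Math.nextDown(c - Math.ulp(c));
        double hi = Math.nextUp(c + Math.ulp(c));
        System.out.println("[" + lo + ", " + hi + "]");
    }
}
```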

Or it could make the choice based on the IEEE rounding mode. Sadly, the C math libraries I’ve seen don’t even claim to obey the rounding mode for many of the functions, in addition to lacking any guarantees of precision and monotonicity. So it’s small wonder that ECLiPSe doesn’t get it quite right some of the time.

But I think the original question was focused on whether SWIP should be using a different math library, and if so, which one. It’s not clear to me that there’s any obvious reason to change what’s currently being used. If anything, I’d give a small nod to GNU because it at least tries to address these issues, but I’ve no other experience to back this up.

After a quick git clone and some git grep, all I can find is that they call the C library function cos(). I haven’t searched whether they somehow modify the floating-point environment, nor how they print float answers.

Well then you have misread the question, and it is also a kind of far-fetched
assumption, since I am not an Orthodox Priest. I can 100%
assure you that I am pro diversity. The question was only:

For example in Java I can kind of “query” what is used.
If I find some code that reads:

x = Math.cos(2.3);

I know it uses Math. If I find some code that reads:

x = StrictMath.cos(2.3);

Then I know it uses StrictMath. I can also “control” it, by using either
Java class and its static members. Why consider such a flag, maybe a
novel one? Well it could help in writing test cases as an example use case.

Somehow I have this test case cos(2.3) in my own test suite, and stepped
over the problem this way. Have to check whether I copied the cos(2.3)
test case from ISO core standard document, some example section.

Maybe some wise guy already put it there, because it is a notorious example?

Edit 22.03.2022:
Ossification and orthodoxy are a dead end; I already wrote about it:

ISO Core Standard doesn’t work, Living Standard might work
https://medium.com/@janburse_2989/iso-core-standard-doesnt-work-living-standard-might-work-762a06f6fb01

Sorry for the confusion. I meant the GNU C math library, not the C math library that GNU Prolog uses.

But it appears I misread the intent of the thread, so I’ll retreat to my corner.

I wonder whether the existing flag iso could be used for such a purpose.

iso (bool, changeable)
https://www.swi-prolog.org/pldoc/man?section=flags

Currently it doesn’t have any effect on arithmetic.
What could go wrong?

It would then also have the semantics of some StrictMath, not in the Java sense,
but in the ISO core standard sense. Or maybe there are other flags?

As far as I recall, the ISO Prolog standard says very little about floats: merely that they (may?) exist, some functions that should work on them, and that neither NaN nor Inf exists, and therefore a computation that would produce them must raise an exception. Very old systems used a 32-bit float and removed some bits to accommodate the Prolog type tags :slight_smile:

I think we should leave floats alone and get on with logic :slight_smile:


I didn’t ask something about floats. I asked about arithmetic:

But I should rephrase this to arithmetic AND trigonometric functions AND
exponential functions. The scope of the flag would be not only (-)/2, (+)/2,
(*)/2, etc., but also round/1, cos/1, exp/1, etc. The ISO/IEC 13211-1 chooses
LIA for the floats of a Prolog system, and defines things like

/* 9.1.6.1 Floating point to integer rounding functions */
round(x) := floor(x + 1/2)
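Taken literally, floor(x + 1/2) resolves ties toward positive infinity, which differs from “round half away from zero” (what C’s round() does) on negative halfway cases. In Java, where Math.round is specified in the floor(x + 1/2) style, the difference is easy to see (an illustration only):

```java
public class RoundStyles {
    public static void main(String[] args) {
        double x = -2.5;
        long lia  = (long) Math.floor(x + 0.5); // LIA-style: floor(x + 1/2) gives -2
        long java = Math.round(x);              // same tie rule: -2
        long away = (long) (Math.signum(x) * Math.floor(Math.abs(x) + 0.5)); // C round(): -3
        System.out.println(lia + " " + java + " " + away); // -2 -2 -3
    }
}
```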

Float numbers and their arithmetic, trigonometric and exponential functions occupy
quite a large part of the ISO core standard for Prolog. LIA means the above definition
of round would be required whether the Prolog system uses 32-bit floats or 64-bit floats.
The SWI-Prolog built-in evaluable function round/1 currently behaves differently.
That the representation is variable doesn’t mean the functionality is completely
variable. But the representation is indeed variable:

[screenshot]

Edit 24.03.2022:
Similarly, StrictMath and Math in Java both have 32-bit (the Java float type) and 64-bit
(the Java double type) routines. This is done via overloading. I can ask in Java:

x = Math.round(2.3f)

And then round(float) and not round(double) is invoked, since the literal
2.3f is a 32-bit literal and overload resolution picks the float variant. I think
cos() has only a cos(double) and no cos(float) variant in standard Java,
but cos(float) is also a generally defined function in LIA as adopted by the
ISO core standard, which might have CPU, FPU, GPU, etc. support,
through instructions and/or routines.
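A quick way to see which overload was selected is to box the results (illustrative; the boxed types mirror the return types int and long):

```java
public class Overloads {
    public static void main(String[] args) {
        Object a = Math.round(2.3f); // int round(float) is picked, boxes to Integer
        Object b = Math.round(2.3);  // long round(double) is picked, boxes to Long
        System.out.println(a.getClass().getSimpleName() + " " + a); // Integer 2
        System.out.println(b.getClass().getSimpleName() + " " + b); // Long 2
    }
}
```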

But the scope of StrictMath versus Math is usually only the rounding
functions, trigonometric functions and exponential functions, i.e. round/1,
cos/1, exp/1, etc.

For control of arithmetic Java has the strictfp modifier, which would
affect (-)/2, (+)/2, (*)/2, etc., I guess, and might affect whether
64-bit floats can temporarily be turned into 80-bit floats:

Extended precision
Extended precision formats support a basic format by minimizing
roundoff and overflow errors in intermediate values of expressions
on the base format
https://en.wikipedia.org/wiki/Extended_precision

Edit 24.03.2022:
But now I am confused. I recently made the step to JDK 16, and now
I read “As of Java 17, using this keyword has no effect.” Does extended
precision not exist anymore? Is it basically dead?

It seems to be dead. Taking a time machine: back then, 80-bit
was cheap (because there was an FPU) and 64-bit was expensive, and
now that is no longer the case (because it is on-chip). At least
when I read JEP 306 it seems so. For the chill effect:

The Sixth Sense (1999) - “I see dead people.”
https://www.youtube.com/watch?v=ZSNyiSetZ8Y
