The Debian integration tests revealed an interesting bug. It showed up on the i386 platform using Debian unstable with gcc 12.2. The bottom line is that rationalize/1 failed to do its job. Much simplified, it performs
```c
int num, den;
double target;
...
while( (double)num/(double)den != target )
  <update num and den>
```
Trying this with target 0.1, the second iteration nicely comes up with num = 1 and den = 10, but the while loop just continues (until it decides the numbers got too big, causing it to use rational/1).
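The failure can be reproduced in isolation. Below is a minimal sketch (illustrative, not the actual SWI-Prolog code), assuming an i386 build where gcc targets the x87 FPU (e.g. `gcc -O2 -m32`):

```c
#include <stdio.h>

int
main(void)
{ int num = 1, den = 10;
  double target = 0.1;

  /* With x87 excess precision, the quotient stays in an 80-bit
     register while target is promoted to 80 bits, so the comparison
     may fail even though both round to the same 64-bit double. */
  if ( (double)num/(double)den != target )
    printf("not equal (excess precision in effect)\n");
  else
    printf("equal (comparison done at 64 bits)\n");

  return 0;
}
```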
So, 1/10 == 0.1 fails using gcc-12 on this platform. Why? Well, we can “fix” this using
```c
volatile double f = (double)num/(double)den;
if ( f == target )
  ...
```
The `volatile` keyword tells the compiler to put `f` in memory and thus forces the intermediate result into a proper 64-bit double. In the original code the `(double)num/(double)den` stays in an 80-bit extended float register and `target` is promoted to this type before comparing, so we compare the 80-bit approximation of 0.1 to the 64-bit one and fail.
In fact, with enough optimization gcc sees there is no point in the `volatile` declaration and the comparison fails again. For this reason I now put the double in a union with a 64-bit int and do the equality test by comparing the integers. That forces the compiler to put the intermediate result in memory.
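A sketch of that union trick, assuming 64-bit IEEE doubles; the names are illustrative rather than the actual SWI-Prolog code:

```c
#include <stdint.h>

typedef union
{ double  f;
  int64_t i;
} fpbits;

/* Equality by comparing the 64-bit patterns.  Storing into the union
   forces the value out of the 80-bit x87 register and rounds it to a
   proper 64-bit double.  Note that bitwise equality differs from ==
   for 0.0 vs -0.0 and for NaNs; something to keep in mind for
   general use. */
int
double_bit_eq(double a, double b)
{ fpbits pa, pb;

  pa.f = a;
  pb.f = b;
  return pa.i == pb.i;
}
```

The loop condition then becomes something like `while( !double_bit_eq((double)num/(double)den, target) )`.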
This works. Now two questions remain:
- Is what gcc does actually correct? Maybe that is not the right question, as the gcc notes on float processing acknowledge that the default prefers performance over strict compliance with the C standard. So far no previous version of gcc did this though (AFAIK).
- Is there a more elegant way to get the result we want? One option is probably to use gcc flags to tweak its float handling. The downside is that other compilers may do the same, and these are the only two places where we want to compare floats at 64 bits.
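For the record, the gcc knobs that should give 64-bit comparisons here are, AFAIK, `-fexcess-precision=standard` (round at casts and assignments, as C99 demands), `-ffloat-store` (store floating point variables to memory), and `-msse2 -mfpmath=sse` (do the arithmetic in SSE registers, which work at 64 bits).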