Could SWI-Prolog have or need unums?

Again I don’t want to pollute other threads. Interestingly,
unums have never appeared in the SWI-Prolog forum, although
Ulrich Neumerkel got interested in them in the past.

One could use a different mindset to realize CR_XXX operations
that don’t need any flags. I find an approach in the Go programming
language, which returns an Accuracy for its elementary operations:

x = 1000 (0x.fap+10, prec = 64, acc = Exact)
y = 2.718281828 (0x.adf85458248cd8p+2, prec = 53, acc = Exact)
z = 1002.718282 (0x.faadf854p+10, prec = 32, acc = Below)

https://pkg.go.dev/math/big#example-Float.Add

The above is a mixed operation, with two different argument precisions
and even a different result precision. But I guess it would also work
with setting all precisions to binary p=53. The big.Float datatype
uses a binary radix, i.e. r=2. With such an API there will be no
need for flags and subsequently no problem with multi-threading.

You need to be able to stash the accuracy somewhere; it needs
at least 2 bits. They store it in the big.Float type after
performing an operation. A little nasty: their big.Float seems to
be mutable, not immutable as one would expect:

const (
	Below Accuracy = -1
	Exact Accuracy = 0
	Above Accuracy = +1
)

I don’t know what Below and Above mean: towards and away
from zero, or towards and away from minus infinity? And what is the
rounding of the operations? It seems the rounding mode can
also be specified in the big.Float, in the value holder for an
operation. And the Accuracy is an indicator that works relative
to the exact, unrounded result. I guess one can reconstruct
a little interval from it rather straightforwardly?

Edit 24.10.2022
Oops, possibly not relevant to clpBNR, since it uses coarse
intervals when leaving rational numbers and switching to floats,
controlled by max_rational_size_action and max_rational_size:

lower_bound_val_(real,L,IL) :-             % round outward (if float)
	Lv is L, Lv \= 1.0Inf,
	(rational(Lv) -> IL=Lv ; IL is nexttoward(Lv,-1.0Inf)).

upper_bound_val_(real,H,IH) :-             % round outward (if float)
	Hv is H, Hv \= -1.0Inf,
	(rational(Hv) -> IH=Hv ; IH is nexttoward(Hv,1.0Inf)).
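For plain float64, the same outward-rounding step exists in Go as math.Nextafter (Go’s spelling of nexttoward); this is just an illustrative analogue of the two clpBNR predicates above, not clpBNR code:

```go
package main

import (
	"fmt"
	"math"
)

// widen rounds an interval outward by one float64 ulp on each
// side, mirroring what lower_bound_val_/upper_bound_val_ do.
func widen(lo, hi float64) (float64, float64) {
	return math.Nextafter(lo, math.Inf(-1)), math.Nextafter(hi, math.Inf(+1))
}

func main() {
	lo, hi := widen(1.0, 1.0)
	fmt.Println(lo, hi) // 0.9999999999999999 1.0000000000000002
}
```

The output matches the fuzzed clpBNR interval for {X==1.0} quoted later in the thread.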

Anyway, with some Accuracy bits, one would need to have a database
for irrational numbers such as π, referred to symbolically in the code by “pi”,
so that we know whether the floating point value is above or below
the real value, and we could construct a finer interval. Are finer intervals
worth the pain? If they came practically for “free” in the corresponding
operations, there could maybe be some benefit.

The Accuracy can possibly be reduced to one bit only, by always returning
the lower interval end in the inexact case, reducing the effort for Accuracy
representation to a single bit, the ubit, and we arrive at unums. But then
we lose the information about which floating point value was preferred
according to the rounding mode.

From my crude understanding of unums, they eliminate rounding errors when performing arithmetic on real numbers - essentially half the possible values are intervals between one precise value and the next.

It would seem to me that any computer language interested in mathematical correctness over reals would be interested in unums but the progress in implementing hardware, compiler, and library support for them seems to be very slow; correct me if I’m wrong.

Now there is a ~5 year old unum library on GitHub (GitHub - LLNL/unum: Universal Number Library), implemented on top of (you guessed it) GMP, so I suppose there is a path for supporting them in SWIP (and others). However the README states:

Operations on unums through the library are several thousand times
slower than operations on IEEE standard types with floating-point
hardware.

So, IMO, it’s premature to talk about support for them in SWI-Prolog (and most other computer languages) without a pretty strong use case.

True. Actually any floating point constant in a clpBNR constraint expression is also “fuzzed”:

?- set_prolog_flag(clpBNR_verbose,true).
true.

?- {X==1.0}.
X::real(0.9999999999999999, 1.0000000000000002).

?- {X==pi}.
X::real(3.1415926535897927, 3.141592653589794).

and, as you noted, any conversion from a rational to a float is fuzzed. But all this would be avoided (and interval arithmetic primitives greatly simplified) if unums could be used instead of floats in interval bounds.

The unum interval for π would be:

(3.141592653589793, 3.1415926535897936)

But which one is the to_nearest value? In the Go programming
language you would know which one the to_nearest is, since
Accuracy is 3-state and not 2-state as in unums.

It is fully contained in the clpBNR interval, which is a little
strange; I was rather expecting that one of the clpBNR interval
borders would equal an unum interval border?

?- X is nexttoward(3.141592653589793, -1E300).
X = 3.1415926535897927.

?- X is nexttoward(3.1415926535897936, 1E300).
X = 3.141592653589794.

Edit 24.10.2022
P.S.: How to obtain this interval. Step 1: go to the Microsoft
calculator accessory, which has more digits. Press π.
It will show (possibly not correct in the last digits):

3.1415926535897932384626433832795

Now go to SWI-Prolog and do this, using 1E300 for infinity:

?- X is 3.1415926535897932384626433832795.
X = 3.141592653589793.

?- X is nexttoward(3.141592653589793, 1E300).
X = 3.1415926535897936.

That was not extremely rigorous, but you get the idea. In particular
we might need to call nexttoward(…, -1E300) sometimes,
this time using -1E300 for -infinity.

It would be useful to have a tool like this for the programmer’s learning. Then, having established the expected precision of a predicate for a given input, one could dispense with the precision calculations entirely and just do conventional IEEE arithmetic.

In most cases I suppose you could do (next_float() being like nexttoward):

F = ...., next_float(F, G), next_float(E,F), 
my_op(F, FOut), my_op(G, GOut), my_op(E, EOut). 

and if EOut <= FOut <= GOut you have an error bound, and you can compute bit precision.

Then make some library pred(+F, +Op, Precision).

Tools that do these things are good for studying an algorithm, with the programmer using the results to evaluate and improve the calculation until it’s good enough. pred/3 is used to help create production code, but the production code itself wouldn’t use pred/3 if the purpose is high-throughput math.

Yet another way to compute the finer interval in SWI-Prolog is this:
it seems that through calling SWI-Prolog’s ar_pi() you also get
a finer interval. I find the following result:

?- L is roundtoward(pi,to_negative), H is roundtoward(pi,to_positive).
L = 3.141592653589793,
H = 3.1415926535897936.

Works also for this constant:

?- L is roundtoward(e,to_negative), H is roundtoward(e,to_positive).
L = 2.718281828459045,
H = 2.7182818284590455.

In unums you would only have one arithmetic call, and not two
arithmetic calls with two different rounding modes.

Edit 24.10.2022

For elementary operations in unums, if the unum ubit
is passed around and preserved, you could query the result.
It’s more a property of the return value than an additional
return value. But I don’t know how this would fit with everything
else. ECLiPSe Prolog could be a role model: it has an
interval data type called “bounded real” (breal), written
as L__H. But I don’t find an ECLiPSe Prolog way of querying
the “bounded real” for pi. For composite operations you
need to bring interval arithmetic to the table, and even
in the unums world they also need a pair L and H, which
is called a “Valid”:

Valids are described as a Type III Unum mode
that bounds results in a given range.
https://en.wikipedia.org/wiki/Unum_(number_format)#Valid

If you use unums this is true. Would they be interchangeable with other number systems in your code? I would not want to write a fancy algorithm based on unums, and have to recode it after for float.

Back to the non-unum idea I posted: It seems I’d want another parameter to really nail the concept: I don’t want the next float, but a float expressing my expected precision at point of call. Which is to say, I need the unum info after all, in order to derive correct unum info as output.

To implement this right, instead of next_float(F,G), we’d need next_float(F,G, Precision) defined so that G-F is non-zero, G-F rounded to Precision is non-zero, but to Precision-1 is zero.

So then if FP is a float pred, we have unumify(FP, UP) yielding a unum version of the predicate, with input and output being unums, that calls FP more than once to evaluate its precision.

If the predicate has a parameter, you can make the result
type dependent on the type of this parameter. It’s the same
idea as here, this polymorphism:

abs : integer -> integer
abs : float -> float

Your function fun could be, using “breal” type from ECLiPSe Prolog:

fun : float -> float
fun : breal -> breal
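In Go, a result type that follows the argument type could be sketched with generics (my own illustration, unrelated to ECLiPSe’s breal):

```go
package main

import "fmt"

// Number lists the two "instances" of the abs signature above.
type Number interface {
	~int64 | ~float64
}

// Abs is a single definition whose result type follows its
// argument type, the polymorphism described in the post.
func Abs[T Number](x T) T {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	fmt.Println(Abs(int64(-3)), Abs(-2.5)) // 3 2.5
}
```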

I can use this polymorphism in ECLiPSe Prolog to nevertheless find
an interval for π. But it is not correctly rounded, it seems; quite
coarse numerical balderdash:

/* ECLiPSe Prolog 7.1.12 */
?- X is 4*atan(1).
X = 3.1415926535897931

?- X is 4*atan(1.0__1.0).
X = 3.1415926535897927__3.1415926535897936

An interval that doesn’t have the to_nearest value, here
3.1415926535897931, as one of its borders is not a very
fine-grained interval. For floating point values, the multiplication
by 4 should not change the interval in terms of ULP, since 4 is
a power of 2. It will only add something to the exponent
of the floating point values, and not change the bits of
the mantissa of the involved floating point values.

Yes, there’s an additional outward rounding for ground expressions in clpBNR constraints (pi is just such an expression). I should clean that up.

I am currently implementing unums in pure Prolog. There is already some
progress: basic operations with HALF_EVEN and then an atan function
realization. It can be used to compute pi; it has pre-defined the Euler
formula 20*atan(1/7)+8*atan(3/79):

/* Compute pi with 53 Bit Precision*/
?- mp_pi(53, (M,E)), P is M*(2**E).
P = 3.141592653589793.

I have no clue how accurate the atan is. Unlike libBF it doesn’t use
Ziv iteration, but a dynamically estimated fixed number of terms
of the Taylor expansion in a Horner scheme implementation. Here are
some differences between computing pi in different ways:

/* Difference between pi in N Bit Precision to Machin-like Formula */
?- between(1010,1030,N), mp_eval(16*atan(1/5)-4*atan(1/239)-pi, N, X),
   write(N-X), nl, fail; true.
1010-(0, 0)
1011-(0, 0)
1012-(0, 0)
1013-(1, -1011)
1014-(0, 0)
1015-(-1, -1013)
1016-(0, 0)
1017-(0, 0)
1018-(0, 0)
1019-(0, 0)
1020-(-1, -1018)
1021-(1, -1019)
1022-(1, -1020)
1023-(-1, -1021)
1024-(0, 0)
1025-(1, -1023)
1026-(-1, -1024)
1027-(-1, -1025)
1028-(1, -1026)
1029-(-1, -1027)
1030-(0, 0)
true.

Edit 08.11.2022
Maybe you can use it for performance benchmarking the new libBF
binding for SWI-Prolog. Here are some timings I get for SWI-Prolog
and my Prolog system using the pure Prolog arbitrary floats code:

/* Jekejeke Prolog 1.5.5 Windows JDK 1.8 */
?- time((between(1,1000,_), mp_pi(53, _), fail; true)).
% Threads 375 ms, GC 2 ms, Up 401 ms (Current 11/08/22 11:48:47)
true.

/* SWI-Prolog 8.5.20 Windows */
?- time((between(1,1000,_), mp_pi(53, _), fail; true)).
% 3,358,000 inferences, 0.391 CPU in 0.391 seconds (100% CPU, 8596480 Lips)
true.

More or less the same speed. But still much slower than any Math.atan().
Here is a snapshot of the source code. I called the basic arithmetic libbf.p
since the work is inspired by libBF and I represent the numbers with
a mantissa stripped of its trailing zeros:

libbf.p.log (6,4 KB)
mpatan.p.log (3,4 KB)
swistub.p.log (1,5 KB)

Maybe I will include the pure Prolog arbitrary floats in a next release as an
experimental library for my system. The error between computing pi in different
ways is also seen in libBF online. I was toying around with this version:

The Scientific Web Calculator
http://numcalc.com/

But it has different patterns, so I guess my mp_pi/2 and/or mp_atan/3 are not
correct. The internal basic arithmetic operations have a cr_ prefix and are
correctly rounded. On the other hand, the prefix mp_ means only multi-precision.
It doesn’t mean ultra accuracy at the moment. For example I find:

Now you can try libBF online, the small errors sit elsewhere:

> \p 1024
> 16*atan(1/5)-4*atan(1/239)-PI
2.2...e-308
> \p 1025
> 16*atan(1/5)-4*atan(1/239)-PI
0.0

Edit 08.11.2022
More about speed, comparing pure Prolog arbitrary float 53-bit precision
to Java Decimal128 with pure Prolog atan, and to Math.atan(). There
are orders of magnitude between each. The comparison with
Decimal128 is a little unfair, since it generates more bits:

?- time((between(1,1000,_), pi128(_), fail; true)).
% Threads 62 ms, GC 1 ms, Up 73 ms (Current 11/08/22 13:39:33)
true.

?- time((between(1,1000,_), _ is 20*atan(1/7)+8*atan(3/79), fail; true)).
% Threads 0 ms, GC 0 ms, Up 4 ms (Current 11/08/22 13:41:11)
true.