Arithmetic function 'cputime' resolution

On MacOS, the resolution of function cputime appears to be 1 microsecond. Is this generally guaranteed or is it platform dependent?

The arithmetic function cputime is new to me. Thanks. I once wrote my own cputime, below, though I never had occasion to use it for a serious measurement. It displayed many digits, so I assumed the resolution was fine enough (Intel macOS Monterey).

:- meta_predicate cputime(?, 0, ?).

cputime(N, G, T) :-
    writeln("Running pac runtime library cputime/3...  "),
    cputime(N, G, 0.0, T).

cputime(0, _G, T, T).
cputime(N, G, T, Total) :-
    N0 is N - 1,
    cputime_for_step(G, S),
    T1 is T + S,
    cputime(N0, G, T1, Total).

cputime_for_step(G, T) :-
    statistics(cputime, T0),
    call(G),                    % run the goal being timed
    statistics(cputime, T1),
    T is T1 - T0.

Another simple version of cputime:

cputime:- statistics(cputime, T), b_setval(cputime, T).
cputime(T):- statistics(cputime, Stop), b_getval(cputime, Start), T is Stop - Start.

Well, one needs the OS to get the CPU time :slight_smile: I don’t know. Roughly there are two interfaces for most OSes, an old one that gets these statistics in terms of the scheduler frequency and a new one that typically allows querying at nanosecond resolution. If possible we use the new interface :slight_smile:

It’s an old thing that came with C-Prolog compatibility :slight_smile: I’d use statistics/2, which is more widely supported. Unfortunately the details differ: SWI-Prolog reports seconds as a float, while many systems report milliseconds as an integer.

So cputime isn’t even consistent across Prolog systems, let alone OSes; good to know.

statistics/2 similarly seems to provide microsecond resolution; on MacOS:

?- statistics(cputime,T0),statistics(cputime,T1).
T0 = 27.036738,
T1 = 27.036741.

Is this guaranteed on all supported platforms?

Should I assume the nanosecond time isn’t directly accessible and may not be supported on all platforms?

I don’t know. The border of “supported” is not that clear. SWI-Prolog has binary releases for Ubuntu, MacOS and Windows, so if anything goes wrong on any of these it is typically fixed before releasing. On other platforms we depend on people sending bug reports or pull requests. SWI-Prolog is supposed to work on anything that provides a C compiler that conforms to the C11 standard and has at least a basic POSIX compliant OS interface.

The clock API (except for Windows) relies on clock_gettime() on most systems (with fallback to older standards). The struct timespec provides nanoseconds which are translated to a C double. Possibly we should expose clock_getres(), which provides the resolution provided by the OS/hardware?

Why do you care about this level of detail?

The code is here: function CpuTime in pl-os.c

The timespec structure gives nanosecond time stamps, but actual accuracy would depend on the underlying CPU.

On my Linux system (Chromebook, Debian), all three of these are defined:

If you can’t figure out what’s defined by looking at the cmake files (and I couldn’t), just add this to pl-os.c and rebuild:

#if defined(HAVE_CLOCK_GETTIME)
#warning yes: HAVE_CLOCK_GETTIME
#endif
#if defined(HAVE_TIMES)
#warning yes: HAVE_TIMES
#endif

I’ve been wondering whether the system clock can be used to generate unique timestamps, i.e., two successive calls to cputime (or some available alternative) would generate two different values on all platforms of “interest” (subjective, I know). I really don’t care about the actual values or what they represent; just that they’re monotonically increasing.

The answer seems to be no.

It’s only a few lines of code, so it seems useful … the result I get is 1 nsec on both an AMD CPU and an Intel CPU.

I wouldn’t recommend doing anything based on my input. cputime was an option I was considering and I was just curious. Using a random_float is more than good enough for what I need.

Too late. ENHANCED: added cputime_resolution to is/2 by kamahen · Pull Request #1003 · SWI-Prolog/swipl-devel · GitHub
Unless @jan decides it’s a bad idea.
Anyway, Python has it as part of its library: time.clock_getres(clk_id), so that’s another vote for exposing clock_getres().

Use statistics(inferences, Count). See statistics/2 (doesn’t show in the manual right now due to a conflict about section identifiers in the manual. Fixed the sources).

It is surely a bad idea to add it as an arithmetic function. cputime is already something that got there for compatibility a long time ago. I think the most Prolog-natural way would be to make it a Prolog flag, or possibly an additional key to statistics/2. I don’t see enough reason to add it anyway. We would surely need an alternative implementation for Windows, and quite likely this will break several other OSes. I don’t think it is important enough to go through the portability trouble, and actually I think it isn’t worth the lines of code to implement and document it.


Perfect. I just didn’t think of using statistics for this purpose.

No clue why you need this. It might be relevant to point at the notion of a generation counter that is associated with the (predicate) database. That allows querying last_modified_generation on predicates and modules using predicate_property/2 and module_property/2. The global current generation is not accessible (but can be added if there is a good use case).

Motivated by this thread: Different results when using library(simplex) or R-package lpSolve / lpSolveAPI - #6 by ridgeworks
I’ve been looking at how to use library(simplex) in a CLP framework, the main issue being an API mismatch: variables within a CLP system are represented by attributed variables, while variables in simplex constraints are represented by atoms or compound terms, e.g., x or x(0). In building a CLP-friendly wrapper for library(simplex), I wanted a lightweight mechanism to associate a name with a CLP variable for a relatively short period of time.

The first prototype used:

simplex_var_(V,var(Vstr)) :-  term_string(V,Vstr).

This looks up a variable name usable by simplex given a constrained variable V. Vstr is the “address” of the variable V converted to a string so it will be unique for any variable of interest. But it’s potentially not safe due to variable relocation during garbage collection. And it’s unlikely to be the most performant solution based on a simple benchmark.

A better approach is to use an attribute to store a unique ID of some kind. Using cputime (a monotonically increasing value) was the first stab, but it appears to have platform dependencies and will probably start to fail at microsecond resolution (two successive queries may yield the same value). Using the inference counter neatly solves this problem. So the code becomes:

simplex_var_(V,SV) :-
	(   get_attr(V,clpBNR_toolkit,SV)      % reuse existing name if present
	->  true
	;   statistics(inferences,Vname), SV = var(Vname), put_attr(V,clpBNR_toolkit,SV)
	).

attr_unify_hook(var(_),_).     % unification always does nothing and succeeds

This works quite nicely and is 10 times faster than the term_string-based implementation.

Using the new clpBNR wrapper module for Example 2 from library(simplex):

?- X1::real(0,1), X2::real(0,2), lin_maximize(7*X1+4*X2,{6*X1+4*X2=<8},Value).
X1 = 1,
X2 = 1r2,
Value = 9.

?- X1::integer(0,1), X2::integer(0,2), lin_maximize(7*X1+4*X2,{6*X1+4*X2=<8},Value).
X1 = 0,
X2 = 2,
Value = 8.

The new variable name attribute is removed before the completion of lin_minimize/3 so there’s no lingering effects.

gensym/2 ?
(whose documentation also notes that uuid/1, term_hash/2, and variant_sha1/2 may be used to safely generate various unique or content-based identifiers.)

Yes, gensym or uuid could be used as well, although they aren’t as time-efficient as statistics(inferences,ID). They also result in the creation of new (very temporary) atoms, which is avoided by using statistics.