Why "steadfastness"?

“Steadfastness” of a predicate is a common part of the (ISO) specification. It refers to the fact that a predicate, in the presence of an “output argument” that is already instantiated, behaves as if that argument were uninstantiated, with the separately computed “output” being unified with the passed argument only at the very end.

However, it seems that predicates could profit from taking a peek at the output argument in order to “fail fast”, instead of pressing on to the end only to find out that, when all is said and done, the final unification will fail anyway. That just drives up resource usage.

I can think of these reasons:

  • Easy to standardize on. It is very clear what “steadfastness” implies, but “failfastness” is much more arbitrary and may lead to unwelcome discussions.
  • It’s easy to implement.
  • The tradeoff between having to perform additional predicate entry checks in all cases vs. having the ability to fail fast occasionally is not so great (and possibly worse on machines from the mid-90s with 16 MiB RAM and 50 MHz CPU). In any case, it depends on the problem.

But then again, steadfastness is easily added to failfast predicates: call with fresh variables in the “output argument” positions, cut, then unify with whatever was expected. Going from steadfast predicates to failfast ones is, however, impossible.
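The wrapping trick just described can be sketched as follows (p_ff/2 and p_steadfast/2 are hypothetical names used only for illustration):

% Assume p_ff(+In, ?Out) is a fail-fast predicate that may peek at Out.
% A steadfast version computes into a fresh variable, so p_ff always
% sees an uninstantiated output, commits, and only then unifies with
% whatever the caller actually passed:
p_steadfast(In, Out) :-
    p_ff(In, Out0),    % Out0 is fresh here, so no peeking is possible
    !,                 % commit to the first solution
    Out0 = Out.        % unification happens at the very end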

That’s a very bold claim 🙂

It really depends on how far you want to go with it… A couple of examples:

?- foo is 1 + 2.

Should is/2 immediately notice that foo is not a variable? Not a number? Still doable.

?- sort([a,b,c], [x|_]).

Should sort/2 immediately notice there is no x in the first argument and fail? Or should it compare the heads of the two lists, notice that x is definitely bigger than a and fail? At some point sorting and attempting to unify starts to look like the pragmatic way out.
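For reference, both of the queries above simply fail in current SWI-Prolog, in each case after doing the full work first (evaluating 1 + 2, resp. sorting the list) and only then failing the unification:

?- foo is 1 + 2.
false.

?- sort([a,b,c], [x|_]).
false.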

It might be easy to implement tricks for some predicates. But is it doable in the general case? I suspect this has something to do with the decidability of the problem.

EDIT: not sure about the “decidability”. I lack the formal education to know such computer sciency stuff. If someone could clear this up for me I would be thankful. The point I was trying to make: in the general case you can’t know the result of a computation without doing the computation. I might be wrong but the idea by @dtonhofer implies that there would be an oracle to decide early whether the provided output argument cannot be the answer?

PS: Are you just looking for constraint logic programming?

That statement seems a bit weird to me. Built-in predicates are typically steadfast. Even the famous L = [A,B,C], sort(L, [1,2,3]) is steadfast IMO (although different opinions are possible). var(X), X = 1 is not a good example either, because the mode is @ rather than -. X, X=true is wrong as well, since call/1 has mode (+) (actually (:) as it is module sensitive), i.e. its argument must be a callable term. The same holds for your union example: you violate the mode, which creates an illegal goal. As with most programming languages, that may be reported as an error, but may typically also result in undefined/implementation-defined behavior.

The “fail fast” idea, and more generally performance, is one of the motivating ideas behind steadfastness: the implementation has access to the expected result and can use that to avoid creating a data structure (saving memory), to speed up the processing, or to fail early. Some of this is used internally. E.g. clause(+,?) does the head unification directly, reusing the provided term and failing as soon as possible. Also term_attvars/2 explicitly documents that term_attvars(Term, []) efficiently detects that a term is free of attributed variables.
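The term_attvars/2 case can be seen directly at the toplevel; freeze/2 is used here only as a convenient way to create an attributed variable:

?- term_attvars(f(X, g(Y)), []).
% plain variables carry no attributes: succeeds

?- freeze(X, true), term_attvars(f(X, g(Y)), []).
% X is now an attributed variable, so the actual attvar list
% is [X], not []: fails (and, per the docs, fails efficiently)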

It is not done in many places. Possibly findall(X, G, []) could make sense as well. Often, though, it mainly complicates the code, making it more error prone. I guess it makes more sense to rewrite findall(X, G, []) to \+ G, or to have a linter suggest that to the user. If I’d do something with foo is ..., it would be for the compiler to warn with “goal always fails”. This is already in part implemented:

t :-
    nonvar(X),
    writeln(X).

Raises

Warning: /home/janw/src/swipl-devel/linux/t.pl:1:
Warning:    Test is always false: nonvar(X)

Currently such warnings are limited to goals that are compiled to VM code, where we have to compile and handle the argument anyway. nonvar/1 compiles to I_NONVAR N, where N is the variable offset. If we are there anyway, we may just as well test whether the argument is not a variable to begin with (and thus we always succeed), or whether the variable is fresh or singleton (and thus we always fail).

is/2 is a funny case for which we already handle several special cases. Notably, we are interested in the case where the result variable is fresh, as that means we can just dump the result into the output variable instead of unifying. A typical pattern handled is A is B + <int> where A is fresh. This compiles to a single VM instruction that opportunistically hopes B is a small integer, adds the contents and dumps the result in A. If B is something else, it falls back to the general-purpose evaluation.

Thanks.

Quoting this one so I can bookmark it.
