Errors considered harmful

As I understand this, you’re agreeing with me. Am I wrong?

Is this what you meant to say? If “not spot on”, exactly what was the issue?

This strikes me as a very effective approach for throws in libraries. In fact, you wouldn’t have to change the existing throw/1, just rewrite it to something with the same semantics as your suggestion. Unfortunately I don’t think it helps with built-ins and uncaught exceptions in libraries.

And I’m not sure what the containment semantics are, but maybe that’s a solvable problem (or a non-issue).

Aside: Here’s a possible logical description of eq_contains(L,T): succeeds if L is a list containing T, using == for equivalence, otherwise it fails.

This could be implemented as:

eq_contains(0, _) :-  !, fail.       % any L that unifies with 0 is a failure
eq_contains([X|_], Y) :- X == Y, !.  % == equivalence, succeed
eq_contains([_|X], Y) :-             % non-equivalence, try the list tail
   eq_contains(X, Y).
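
A few example queries, showing the == equivalence and the guard against an unbound list (results as I’d expect from the definition above):

```prolog
?- eq_contains([a,B,c], B).    % B == B, so found
true.

?- eq_contains([a,B,c], b).    % the variable B is not == the atom b
false.

?- eq_contains(L, x).          % unbound L unifies with 0: fail, don't enumerate
false.
```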

A simple logical description leads to a simple implementation. Exceptions complicate matters for both the caller and the callee.

Truth be told, I have not been following most of the posts of late as I have been focused on making my model in Prolog. Give me some time to read and digest this entire topic and then I can reply.

However, based on the gist of the few fragments I read here and there, I am thinking my answer will not be agreeing with you.

I don’t know exactly what you are asking by that.

I know the following is a bit convoluted, but at present I don’t want to give out the name of the real-world entity.

For the specific part of the model I was having problems with, I knew the answer came from a set of 3 possible answers. I also knew the answers would repeat in succession, e.g. a,b,c,a,b,c,a,b …. I also knew there existed a relation between some constant values and the value of the hidden variable. Once I figured out how the constant values related to the hidden variable, I picked a set of constant values and recorded the corresponding hidden value as a fact. Then, when I needed the value of the hidden variable, I looked it up from the fact and used that to calculate the correct current value. The value changes every day, so by knowing the correct value on a certain day and the offset from that day, the value of the hidden variable could be correctly calculated.

If this was done using exceptions it would have just made the code that much harder to understand and fix. By just giving an incorrect value the code was much easier to fix. In other words the problem was more like setting up tables for an SQL database and the query was not correct; while the answers were in the correct sequence, they were not aligned to the actual value for that day and needed an offset to shift the rotation.

If you have ever set the timing on a car engine between the crankshaft and camshafts, you know there are markings on the gears that identify how to set up the relation so that the valves open and close correctly in relation to the position of the crankshaft. If you did not know about the markings you could infer them by trial and error, marking the gears yourself until you figured it out. With the markings set at the factory on the gears, it is just much easier.

Many of these flags cause problems. double_quotes is a popular one. unknown and autoload are others and surely there are many more. I’m afraid I don’t really see the merits of turning type and domain errors (let alone others) into simple failure. I mostly consider it a step in the wrong direction in general. As there are feasible localized solutions I think that is the way to go.

Good suggestion. I think they’re equivalent but [] is more descriptive, i.e., the empty list does not contain Y.

So I’d first define what it means. Something like eq_intersection(S1,S2,SI) succeeds if SI is the set intersection between S1 and S2, otherwise it fails. This predicate is deterministic.

After a little thought (and less testing):

eq_intersection([],[],[]) :- !.
eq_intersection(S1,S2,SI) :- var(S2), !,   % for symmetry
	eq_intersection(S2,S1,SI).
eq_intersection([],_,[]) :- !.             % intersection with empty is empty
eq_intersection([X|Xs],S2,[X|Ss]) :-       % include in set intersection
	eq_contains(S2,X), !,
	eq_intersection(Xs,S2,Ss).
eq_intersection([_|Xs],S2,Ss) :-           % not in set intersection
	eq_intersection(Xs,S2,Ss).

I’ve chosen to treat variables as legal argument values:

?- eq_intersection(X,[1,2,3],[1,3]).
X = [1, 3].

?- eq_intersection(S1,S2,SI).
S1 = S2, S2 = SI, SI = [].

?- eq_intersection(X,[1,2,3],SI).
X = SI, SI = [].

but that’s just a design choice. Since it’s deterministic, there’s no generation of all the possible sets that satisfy the specification, just that there’s at least one solution.

I respectfully and totally disagree with this, but I’ve had my day in court and the judge has ruled against me. (IMO, ISO got this wrong 25 years ago.) Not exactly sure why the merits of my case failed the test, but assume it’s because:

By this I assume goal expansion. If so, I’m not quite sure how to define “localized solutions”. Do you mean global in scope but only if the global expansion code is loaded? Or is there some way of limiting the effect of expansion (just my ignorance)?

So you didn’t specify that. But here’s the thing - If I don’t like what library(lists) does, I don’t have to use it. That’s my choice. When it comes to the builtins, I don’t feel I have the same choice, short of forking SWIP and digging into the C code. Or goal expansion which will probably require me to at least read the C code.

But it is what it is.

SQL also gets it wrong with NULLs.
Consider a table with these values:

   A     B
   0     1
   1     null

then sum(A) + sum(B) = 2 but sum(A+B) = 1.

(And I have more examples of other problems with SQL’s null.)

For those who want to try this out (I used sqlite):

create table t(a int, b int);
insert into t(a,b) values(0,1);
insert into t(a,b) values(1,null);
select sum(a)+sum(b), sum(a+b) from t;

Goal/term expansion can be as local as you want. By default it applies to the module it is defined in. Defined in user it holds for all user code. It can use prolog_load_context/2 to notably find out the module that is being loaded and use whatever it wants to define whether the expansion should apply here or not. You can also define it in system and possibly mess up the libraries :slight_smile:
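
A minimal sketch of such a module-restricted expansion (the module name my_app and the safe_is/2 helper are assumptions for illustration, not an existing API):

```prolog
:- multifile user:goal_expansion/2.

% Hypothetical: rewrite X is E into a failing variant, but only for
% code being loaded into the (assumed) module my_app.
user:goal_expansion(is(X, E), safe_is(X, E)) :-
    prolog_load_context(module, my_app).

% Defined outside my_app, so it is not itself rewritten.
safe_is(X, E) :-
    catch(X is E, error(_, _), fail).
```

The prolog_load_context/2 guard is what makes the expansion local: clauses loaded into any other module are left untouched.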

Most predicates have the type and/or domain to which they apply described in the documentation. Anything outside this may change over time. Roughly the policy is that exceptions should be raised whenever reasonably feasible for calls that are outside the defined types/domains/instantiation. Exceptions should also be avoidable using a simple test. ISO number_codes/2 raising a syntax error is IMO a bad decision, as there is no simple way to avoid this exception, while there are plenty of practical (and time-critical) examples where you want to act on some string not representing a number.
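
The workaround for number_codes/2 amounts to catching the syntax error yourself; a sketch (codes_number/2 is a hypothetical name):

```prolog
% Succeeds iff Codes can be read as a number; catch/3 is the only
% way around number_codes/2 raising a syntax error on other input.
codes_number(Codes, N) :-
    catch(number_codes(N, Codes), error(syntax_error(_), _), fail).
```

For example, codes_number([0'4,0'2], N) binds N to 42, while codes_number([0'f,0'o,0'o], N) simply fails.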

Other than a few exceptions a simple type or mode (var/nonvar) check avoids almost all exceptions. Note that just about anywhere you can have resource exceptions, timeouts, aborts, etc.
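
For instance, a sketch of a guarded addition (maybe_plus/3 is a name invented here for illustration):

```prolog
% The integer/1 checks up front avoid the type error that
% Z is X + Y would otherwise raise on non-numeric arguments.
maybe_plus(X, Y, Z) :-
    integer(X), integer(Y),
    Z is X + Y.
```

With this, maybe_plus(1, a, Z) fails quietly instead of throwing, and no catch/3 is needed.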

That sounds reasonable. I’d probably start by implementing it as an include file, so local to the module that included it.

I was never talking about the latter - these are legitimate errors, i.e., the program should not or cannot continue. I was concerned about errors that are real failure cases, i.e., there are no circumstances, as defined by the program under which they would succeed. (Instantiation “errors” are the grey area; they could possibly be interpreted as constraints in some future LP language - not Prolog.)

Well, resource errors could also be the result of an implementation technique that uses a lot of (or unbounded) resources, while other ways to implement the same thing could be successful. I think there can be more surprises. Consider

relative_file_name(Name) :- \+ atom_concat('/', _, Name).

After silencing type errors, relative_file_name(f(x)) is true … More generally, at least negation is rather problematic under this model.

But isn’t Prolog negation problematic in any case? Why would I use it here? Wouldn’t I be better off with:

absolute_file_name(Name) :- atom_concat('/', _, Name).

And then dealing with the failure of absolute_file_name? Of course not being an absolute file name is no guarantee that it’s a relative file name, just that it could be.

And you’re right. Perhaps they should just fail back to a choicepoint so that an alternative could be explored, if that could be done safely. But, believe it or not, I think these should be handled via catch/3. My position is not based on an all or nothing proposition.

I understand I’ve lost my case, but for the record, here’s an excerpt from a document I just found written by Richard O’Keefe in 1984 (https://www.metalevel.at/prolog/okeefe.txt) as input to the standards discussion.

@Comment<
File : PLSTD.MSS
Author : R.A.O’Keefe
Updated: 23 July 1984
Purpose: SCRIBE manuscript for the draft Prolog standard

The following excerpt (in its entirety) discusses type errors (apologies for the embedded notation syntax, but I didn’t want to change anything):

@SubSection[Type Failure]

The third kind of error is when a predicate is given
arguments of the wrong type.  An example of this would be
@w["X is fred"].

If you regard a built in predicate as an
implicit form of a possibly infinite table, it is clear
that @b<logically> the response to such questions should
be simple failure.  If for example, we ask @w["?- plus(X,a,Y)"]
it is clear that there are no substitutions for X or Y which
could ever make this true, and the appropriate response is
simply to fail.

However, for debugging it may be more
helpful to forget about Prolog's "totally defined semantics"
and to produce an error report.  The specifications below
use @b<type_fail>(ArgNo,ExpectedType) to indicate this.
The exact form of the error message is not specified, as a
Prolog program should have no way of getting its hands on
such a message, but it might look like
@Begin[Verbatim]
	! Type Error: argument 2 of plus/3 is not an integer:
	! plus(_273, a, _275)
@End[Verbatim]

An implementation should be able to provide @i<both> forms of
behaviour at the programmer's option.  This could be a global
flag set by e.g. flag(type_fail,_,fail) or flag(type_fail,_,error).
Or if there is some way of providing handlers for errors, it could
be by letting the programmer supply a handler for type errors
which just fails.  There should be a distinction between
continuable errors, where there is an obvious way of proceeding,
and non-continuable errors, where either success or failure would
yield the wrong answers in some model.  Type failures are
continuable.

So “this 1000%”. IMO it’s too bad this didn’t make it into the standard, but maybe he changed his mind. (The rest of the (somewhat long) document is pretty interesting too if you’re interested in an early perspective from one of the “founding fathers”.)

So you’re basically saying that a SQL programmer needs to verify that all the fields are either specified as not null or wrap them in isnull(...,0). Hardly an elegant solution. :wink:

I think this gets back to some of @ridgeworks objection to catching errors - it makes things clunky. If I want a version of open/3 that fails instead of throwing an exception, then I need to write something like this:

maybe_open_read(Path, InputStream) :-
    catch(open(Path, read, InputStream, [type(binary)]),
          error(existence_error(source_sink, Path), _),
          fail).

Which is OK for relatively infrequent operations such as open/3; but for things like is/2, it’s definitely clunky.

The behavior of null in SQL also raises the point that just because there is a simple “rationale” for how it handles null – in most cases, null acts like bottom (⊥), so any operation with null results in null – doesn’t mean that programmers can’t be surprised by it. It’s like the argument over exceptions … C takes the approach that programmers need to check every return code (except they often don’t); Go takes the approach that there’s a common idiom (f, err := os.Open("filename.ext"); if err != nil { ... }) which is clunky; Python/Java/etc take the approach of throwing errors and allowing them to be caught. I prefer the Python/Java approach (as long as programmers don’t silently catch errors and keep going – something I’ve seen all too often in “production” Java code).

Global flags have a way of biting; what happens if I combine two modules, one of which expects X is foo+1 to fail and the other that expects it to throw an exception?

Even module-level flag can be problematic. Maybe module-level, predicate specific is the way to go; but I don’t see how it’s much different from defining purpose-specific predicates such as

maybe_open(Path, Mode, InputStream, Options) :-
    catch(open(Path, Mode, InputStream, Options),
          error(existence_error(source_sink, Path), _),
          fail).

which can be tailored for exactly the kind of failure I want (e.g., throw an exception on an uninstantiated Path, but fail on an existence error).

For the case of is/2, it could be that the overhead of writing a version that uses catch/3 is too high, so it might make sense for the system to provide is_or_fail/2 to avoid the inefficiency.
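
A user-level sketch of such an is_or_fail/2 (the catch/3 overhead is exactly what a system-provided version would avoid):

```prolog
% Fail instead of throwing on type errors in arithmetic; other
% errors (e.g. instantiation) still propagate.
is_or_fail(Result, Expr) :-
    catch(Result is Expr, error(type_error(_, _), _), fail).
```

So is_or_fail(X, foo+1) fails where X is foo+1 raises a type error.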


Sure.

And you could redefine open/4 to return either a stream term or an error term and then write code like this:

open(Path, read, [], StreamOrError),
(   is_stream(StreamOrError)
->  read_term(StreamOrError, Term, [])
;   handle_error(StreamOrError)   % handle the error case (hypothetical handler)
).

although that merely defers the question about the behavior of

open(Path, read, [], StreamOrError),
read_term(StreamOrError, Term, []).

when the open fails – should read_term/3 throw an error or quietly fail?

In the end, I don’t care about consistent theories, whether of error-as-failure or null-as-bottom (in SQL); I care about writing programs that work as correctly as possible, and I’m more likely to do that if I can reduce my cognitive load. If, when I’m writing SQL, I need to remember to wrap my aggregators with isnull(...,0) (but only when a field isn’t defined not null), that means I’ve used up a few precious thought particles that would have been better spent on other aspects of the problem. “A programming language is low level when its programs require attention to the irrelevant”; and if I have to constantly worry about surprise failure (in Prolog) or special behavior with null (in SQL), then I’m being forced to pay attention to the irrelevant (at least, irrelevant to the over-arching problem that I’m trying to solve).

PS: I’m sure that there’s a nice algebra for SQL’s handling of null. But that’s irrelevant: I have certain expectations about sum and +, and SQL violates those expectations with nulls. Far better to have something like Haskell’s Maybe and type checking to ensure no mistakes (in the case of SQL, that I’ve wrapped all potential nulls with isnull(...,0)). I suspect that I could fulfil @ridgeworks’s preference for failure over exceptions by some kind of type checker for Prolog – but it’s unclear to me what a type theory for Prolog is (I intend to investigate this in the near future and if I find anything interesting, I’ll report it here).

I’m not arguing that global flags don’t have issues and it’s not hard to come up with situations, e.g., merging code, where this has to be addressed. I will point out that we already have this issue in spades if you look at the long list of existing configuration flags, iso for one, or the various float_* flags, and probably others which are changeable. I know for clpBNR there’s a documented assumption regarding the settings of critical flags. Probably any module that has this kind of dependency needs to do the same, but I don’t off-hand know of any other examples.

Arithmetic is one, but I may be in the minority who is critically dependent on arithmetic for performance. And I’ve long since removed any necessity to use catch. I expect most don’t even bother to optimize arithmetic evaluation (using the environment flag). But I kind of wonder if this isn’t the tip of a small iceberg, e.g., what about functor, arg, univ and friends.

Aside: from documentation for arg/3

The predicate arg/3 fails silently if Arg = 0 or Arg > arity and raises the exception domain_error(not_less_than_zero, Arg) if Arg < 0.

Does this seem consistent to you? As a programmer are you likely to treat the two cases any differently?
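
Concretely, per the quoted documentation (the exact error message elided):

```prolog
?- arg(0, foo(a,b), A).     % Arg = 0: silent failure
false.

?- arg(3, foo(a,b), A).     % Arg > arity: also silent failure
false.

?- arg(-1, foo(a,b), A).    % Arg < 0: raises the domain error
ERROR: ...
```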

The other concern I have is about code clutter and enabling a clean “style”, e.g., by avoiding what O’Keefe calls “meta-logical” predicates which

discriminate between variables and other terms, a distinction which is completely meaningless from a logical point of view, and can only be understood with reference to the current “state” of a procedural computation.

These are commonly used to avoid conditions which result in errors, which is why I’d like them to be (optionally) part of the predicate that generates the error rather than the responsibility of the caller.

But as I said at the very beginning, this is a pet peeve, not a showstopper (somewhat like ill-formed lists). And the horse looks pretty dead to me right now.

And perhaps this is where we have a difference in perspective. To me failure is never a surprise - it’s a logic programming language and failure is to be expected. I also tend to develop and test in small chunks, so “unexpected” failure rarely comes from code under development; I will have bugs so failure is completely expected. It also comes from mistaken assumptions about some library I’m using, so I tend to be quite cautious about using libraries I don’t understand. But even these tend to reveal themselves in unit testing of own code. (Probably worse than failure is success that gets it wrong; you can go a long way before you sort that out.)

Now this hardly ever works as well as I’d like, so an adequate debugger is essential, as is some kind of profiling tool. Some kind of type checker might also be very useful; it would allow me to check my assumptions against what’s actually implemented.

To me, failure is a surprise if I think I’ve written a deterministic predicate but there’s a bug in it and it (quietly) fails.

That’s good engineering practice, when possible; but some of my “small chunks” have 50 or so cases (e.g., the node types in an AST). They’re an all-or-nothing situation. And I’ve encountered plenty of other situations where there seems to be no partial solution.

As to failure, consider maplist/3, include/3, convlist/3 – I use maplist/3 the most, include/3 sometimes and convlist/3 hardly ever. I wonder why? – convlist/3 ought to cover all three cases (convlist/3 can emulate include/3 with a trivial helper predicate; convlist/3 gives the same result as maplist/3 if the predicate is deterministic). But from a conceptual point of view, convlist/3 seems to require more thinking effort than maplist/3 and include/3. (BTW, foldl/4 only makes sense if the predicate is deterministic; of course, the predicate can choose to ignore some items, just as SQL’s sum can behave differently depending on whether ifnull(...,0) is used or not.)
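
The emulation mentioned above, using a trivial helper (keep_if/3 and even/1 are names invented here for illustration):

```prolog
% include/3 via convlist/3: keep X unchanged when P succeeds on it;
% convlist/3 drops elements for which the goal fails.
keep_if(P, X, X) :- call(P, X).

even(X) :- 0 =:= X mod 2.
```

With this, convlist(keep_if(even), [1,2,3,4], L) and include(even, [1,2,3,4], L) both give L = [2, 4]; and for a deterministic transformer like succ/2, convlist(succ, Xs, Ys) gives the same result as maplist(succ, Xs, Ys).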