Whether and when an application defines an error to be an acceptable result is a decision I’m happy to leave to the programmer. And, as in other programming languages, using exceptions to perform deep exits with useful information is legitimate. My main issue is with primitives (and, to a lesser extent, libraries) that throw exceptions.
I struggle to find an example of an application where a primitive exception is a useful application outcome; e.g., a parser error that reports a type or domain exception in some primitive is hardly helpful. If you accept that, then unless the input is restricted by requirement, additional code is required, either in the form of guards or a catch/3 wrapper.
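To make the two options concrete, here is a minimal sketch (the predicate names sum_guard/3 and sum_catch/3 are mine, purely for illustration) of guarding versus wrapping an arithmetic primitive:

```prolog
% Guard version: pre-check the inputs so the primitive never throws.
% The guards duplicate the type checks the primitive already performs.
sum_guard(A, B, S) :-
    number(A),
    number(B),
    S is A + B.

% Catch version: let the primitive throw, then convert the type error
% into a failure. (Other errors, e.g. instantiation, still propagate.)
sum_catch(A, B, S) :-
    catch(S is A + B, error(type_error(_, _), _), fail).
```

Both queries `sum_guard(1, a, S)` and `sum_catch(1, a, S)` simply fail, where the bare `S is 1 + a` would throw.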
In effect, guards preempt exceptions by turning them into failures. They’re usually cheap (is_list might be one exception), but they impose an additional load on the programmer (unless you consider this useful documentation for the guarded primitive). They’re also redundant in the sense that they perform the same checks as the primitive (if only it would fail in the same way).
Catch wrappers can be relatively expensive compared to the execution cost of the wrapped primitive (ignoring primitives with side effects, e.g., I/O), as well as being rather hard to read (IMO).
So the end result is that I’ve added clutter to the code and impacted performance (to some degree) to achieve the desired application result, even if that result is an error with more useful information, e.g., your parser example.
In my experience, about the only time a primitive exception is actually useful is during development when code is buggy or incomplete. Silent failures due to programming errors are common then, and only some of them are circumvented by primitive exceptions. I would suggest that primitive “noisy failures”, perhaps enabled by the debug flag, would be just as useful while debugging.
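A “noisy failure” could be sketched as a simple meta-predicate (noisy/1 is hypothetical, and I’m assuming SWI-Prolog’s debug flag and message mechanism here):

```prolog
% Hypothetical wrapper: when the debug flag is set, a failing goal
% prints a warning before failing, instead of failing silently.
noisy(Goal) :-
    (   call(Goal)
    ->  true
    ;   (   current_prolog_flag(debug, true)
        ->  print_message(warning, format("goal failed: ~p", [Goal]))
        ;   true
        ),
        fail
    ).
```

The idea is that production code sees a plain failure, while a developer running with debugging enabled gets the diagnostic that a primitive exception would otherwise have provided.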
Summarizing, exceptions just punt the problem up to higher-level code, either as alternative clauses or error handlers. One way or another, turning them into something “callable” imposes a penalty.
Also, purely for aesthetic reasons, code which has a declarative semantics is IMO more elegant and easier to understand. Errors have no declarative semantics, whereas failure does. For example, if X is 1+a fails, it means there is no X for which the goal holds. So I think it’s unfortunate that non-declarative semantics have been built into the lowest levels of Prolog systems. (Instantiation errors are a whole other topic.)
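The contrast at the top level looks like this (the error message shown is SWI-Prolog’s; the exact wording varies by system):

```prolog
% Wrapped in catch/3, the goal fails, which reads declaratively:
% "there is no X such that X is 1 + a".
?- catch(X is 1 + a, _, fail).
false.

% The raw primitive throws instead, stepping outside logical semantics:
?- X is 1 + a.
% ERROR: Arithmetic: `a/0' is not a function
```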
I have no expectation that this discussion will lead anywhere but, as I said at the outset, it’s more of a pet peeve than a showstopper. A global flag which affects primitive semantics seems unworkable, although perhaps a per-module flag is conceivable. A parallel set of some of the primitives, possibly used with in-module goal expansion, is another approach but seems like a lot of work.