If true, that’s a pity. It may be because they’re not logical; one reason is that they throw exceptions. If we can make them logical, they might be used more in pure logical code. Clearly some built-ins can’t be logical by definition (side effects, etc.), but others have no such excuse.
I also have to admit that, in my experience, “pure logical code” is a bit of a rarity as well.
As a counterexample, Haskell goes to great lengths to preserve the pure functional programming model. By comparison, most Prologs seem to barely pay lip service to their mathematical foundation. Maybe most users don’t care (I used to be one of them), but I’m just trying to tip the scales (slightly) in the other direction.
I prefer to think of thrown errors as a mechanism for detecting at run-time things that I’d prefer to catch at compile-time, if I had a sufficiently sophisticated analyzer, e.g., X is 1 + 'a' is clearly an error, and I don’t want it to quietly fail.
My experience with debugging production Prolog code (written by other people or myself) is that the most common bugs - and the most annoying to debug - are due to silent failure when a value isn’t in one of the expected forms … something that might be detected by a Hindley-Milner style type inferencer, for example. That’s why I use rdet a lot – it lets me easily define predicates as “mustn’t fail”; and I don’t remove those checks for production code. (However, rdet is a bit clumsy to use; I’d rather have something like the Picat-style syntax that Jan has recently introduced.)
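For context, a minimal sketch of how rdet is used (library(rdet) is an add-on pack, not part of core SWI-Prolog; main/1 and its body here are made up for illustration):

```prolog
:- use_module(library(rdet)).

% Declare main/1 as "must not fail".
:- rdet(main/1).

main(X) :-
    integer(X),
    format("got ~p~n", [X]).

% With the declaration in place, a failing call such as main(foo)
% raises a goal_failed error (per the rdet documentation) instead of
% failing silently.
```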
No, I’m not confusing the two. I appreciate that pure logic programming is an aspirational objective that certainly isn’t met by Prolog; in fact, they seem to be moving ever farther apart, which I think is a great shame. Again, to quote Sterling and Shapiro:
A good Prolog programming style develops from thinking declaratively about the logic of a situation.
Anything that impedes such thinking is an issue IMHO.
I’ve written enough Prolog over the years to understand why people may see this as a problem. On the other hand, as a programmer, I’d like to have control. Prolog is a dynamic language; X is 1 + 'a' may not be an error, e.g., if a is a user-defined arithmetic function.
Failure is a natural consequence of programming in logic; you use it every time you write a predicate with more than one clause or an if-then-else. And if you want to write predicates that never fail, or fail noisily, that’s your choice; just add a clause at the end to deal with it and make it explicit. It’s just that built-in and library predicates that throw exceptions take away that choice unless you add additional code to prevent it. Just more obfuscation which doesn’t really help, IMO.
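For illustration, a minimal sketch of that final-clause technique (lookup/2 and its little table are made up):

```prolog
lookup(a, 1).
lookup(b, 2).
% Final clause: complain only when no earlier clause could match the key,
% then fail explicitly - a "noisy" failure instead of a silent one.
lookup(Key, _) :-
    \+ member(Key, [a, b]),
    print_message(warning, format("no entry for ~p", [Key])),
    fail.
```

The guard in the last clause keeps it from firing on backtracking after a successful lookup.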
But maybe this all just boils down to personal choice. I like programming in logic, but to each his own.
I prefer not to redefine terminology to fit my purpose. Feel free to edit https://en.wikipedia.org/wiki/Prolog. If successful, let me know and I’ll consider changing my position.
This might have been a bad example. A better one might be a predicate that expects a proper list but is instead given [a,b|z] or [a,b|Z].
I suppose part of our disagreement is over what is meant by Prolog being a “dynamic” language. Prolog is definitely typed (as are Python, Lisp, Haskell, etc.), but two different execution paths (or backtracks) might result in two different types for a variable. I prefer to think of this as a “union” type, in which case a value that’s not part of the union type would be an error.
But over the years I’ve changed my opinion on this many times, and might change my mind again tomorrow based on new experience (pragmatics, as it were). Also, I’ve found that treating “error” as a kind of type lattice bottom (⊥) raises its own set of problems.
I’m not sure why the Prolog parser accepts these as lists in any case. I can’t find any built-ins, other than univ, that can handle them properly. They do have some odd unification semantics:
```prolog
?- [a,b|z] = [a,b|Z].
Z = z.

?- [a,b,z] = [a,b|Z].
Z = [z].

?- [a,b|z] = [a,b,z].
false.

?- member(X,[a,b|z]).
X = a ;
X = b.
```
It’s a bit difficult to actually construct one dynamically other than by using the tricks above. And as the last query demonstrates, z is MIA; not even a choicepoint, just a silent fail. How did we get here? (I have to confess that I previously used a Prolog dialect that didn’t do lists this way at all, so this was a bit of a surprise.)
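For what it’s worth, is_list/1 offers a cheap guard here: it succeeds only on proper lists, so both the improper and the partial list are rejected before a silent fail can happen:

```prolog
?- is_list([a,b,c]).
true.

?- is_list([a,b|z]).
false.

?- is_list([a,b|Z]).
false.
```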
Anyway, I don’t doubt there are good examples that demonstrate your point of view. I’m just trying to argue that there might be another equally valid point of view and the core system shouldn’t invalidate it out of hand by assuming that its definition of an error is the only possible one by generating an exception.
I’m not talking about legitimate errors, e.g., running out of memory, etc., and I’m certainly not objecting to anything that points out “style” errors, e.g., singleton variables, either statically or dynamically, however you choose to define “style”, as long as it doesn’t restrict programmer choice.
Not too long ago floating point overflow was an error/exception; now it can (optionally) result in an infinite value (and succeed!). This can make a huge difference in arithmetic performance at the application level. I just don’t understand why this shouldn’t be generalized to lots of other cases.
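To make the floating-point example concrete (this is SWI-Prolog specific; the flag name and behaviour are from its manual):

```prolog
% Default behaviour: overflow throws an evaluation error.
?- X is 1.0e308 * 10.
% error(evaluation_error(float_overflow), _)

% Opt in to IEEE-style infinities instead:
?- set_prolog_flag(float_overflow, infinity),
   X is 1.0e308 * 10.
X = inf.
```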
Maybe this is just a difference in perspective but I don’t see Prolog as a typed language at all, and that’s a good thing (again, IMO). Atomic values have types, and I suppose the functor of a compound term might be considered its type, but a logic variable has no type. It can take on any value (in the Herbrand universe defined by Prolog) - that’s a pretty big union. And unification, the fundamental mechanism in resolution, really isn’t about type equivalence at all.
Now you’re starting to sound like an old colleague, Bill Older. I never did understand Bill a lot of the time but he was a pretty smart guy.
[a,b|Z] = [a,b,c,d] seems a reasonable thing to say in Prolog, resulting in Z=[c,d]; so I don’t see why the parser shouldn’t accept that.
As for strongly typed … there isn’t good agreement on terminology, but I would call Prolog strongly typed, C++ weakly typed (because of the various kinds of casts), and BCPL untyped. C++ is statically typed; Prolog is dynamically typed.
As for the type of a logical variable – there are some papers on this (which I’m still trying to absorb), but I would suggest that for many programs it’s possible to add constraints to variables so that assigning another type will either fail (probably @ridgeworks’s preference) or throw an error (my preference).
If a predicate throws an error, it’s easily converted to one that fails; the converse is less easy, because the error can contain information about the failure, but a failure is just a failure. Here’s an example from a real program:
```prolog
safe_delete_file(Path) :-
    % The most common error is: error(existence_error(file, Path), _)
    % but other errors are possible, such as
    % error(instantiation_error, _).
    catch(delete_file(Path), _Error, true).
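Going the other way, turning failure into an error, requires the caller to supply the information that a bare failure doesn’t carry. A hedged sketch (must_succeed/1 is my own name, not a built-in):

```prolog
must_succeed(Goal) :-
    (   call(Goal)
    ->  true
    ;   % All we can report is the goal itself; the failure carried
        % no further information.
        throw(error(goal_failed(Goal), _))
    ).

% ?- must_succeed(member(x, [a,b])).
% throws error(goal_failed(member(x,[a,b])), _)
```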
I think the main reason for that is that many Prolog developers lack the training to use Prolog relationally/logically – treating it as an imperative programming language.
I was surely one of them, until I had a long email conversation with Markus Triska – who helped me make the paradigm shift to thinking (cut-less) relationally.
Perhaps the issue is that the functional programming paradigm has seen much more mathematical grounding work made popular than relational programming and its logic paradigm applied to programming.
Yes, I totally agree with this. Coupled with the fact that most such developers come from a long background of experience using imperative languages, so everything looks like a nail. We all must understand the procedural side of Prolog to write any real-world programs, but I think not understanding, and exploiting, the declarative reading is missing an opportunity.
Fair enough. I think that’s because Z can be unified with a proper list. So that leaves [a,b|z] as the odd duck. In fact anything after the | other than a single variable or a proper list is suspect; is that right?
No, I think a failure can be anything you want it to be, including throwing an error if that makes sense in the context of the program. But you have to be prepared for it. And success can report abnormal conditions in extra arguments, if that makes sense. The usual problem is that it’s difficult to discern exactly what the conditions for success and failure are on the predicates you use.
This largely boils down to a matter of style. My pet peeve is the feeling that somebody else’s style is being imposed on me and I have to add code to work around it, either in the form of guards (which are usually cheap) to fail prior to the call, or by wrapping everything in a catch (usually expensive) around the call. And as your two examples demonstrate, this work is often done just to turn it back into a true/fail result.
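To make the two workarounds concrete, a hedged sketch of both styles around a hypothetical read_config/2 that throws existence_error when the file is missing:

```prolog
% Guard style: a cheap pre-check that fails instead of throwing.
read_config_guarded(File, Config) :-
    exists_file(File),
    read_config(File, Config).

% Catch style: wrap the call and map the error back to failure.
read_config_caught(File, Config) :-
    catch(read_config(File, Config),
          error(existence_error(_, _), _),
          fail).
```

Both exist only to turn the exception back into a plain true/fail result.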
In addition to the matter of style, I think we mostly need some clarity on the behaviour of builtin and library predicates, in particular:
- what conditions result in success
- what conditions cause failure
- what conditions generate errors, and what are the types of those errors
- how the predicate behaves on backtracking (determinism)
Given that, with some effort, I can workaround pretty much anything. (And I’m as guilty as anybody when it comes to not providing this level of clarity.)
So just let me make the point that failure is fundamental to the Prolog execution model. “Unexpected” silent failure is never a good thing, but it comes with the territory, IMO. There are lots of good programming languages which are deterministic and never ‘fail’, so maybe that’s a better option.
There are two basic failure scenarios. Your code is called with arguments that don’t meet your specification for success, and you fail to deal with that use case. That’s on you, but there are fairly simple techniques for dealing with such failures and providing additional information as required. One simple method is to just add a clause at the end that only gets called when unexpected conditions arise. Another possibility, often a last resort, is to use the debugger to fill in that information.
The other case is when you call another predicate that unexpectedly fails on you. Perhaps the documentation for the predicate wasn’t clear a) that it could fail, and/or b) what the causes of such failure would be. This case is more insidious because you’re making some assumption about the called predicate that isn’t true, and it’s difficult to see past this; at least it is for me. Again, the debugger is very helpful to sort this out.
At the end of the day, you usually have to add code which removes the possibility of errors from your program, since most program specifications don’t include sudden termination as a successful outcome. This is typically done either by avoiding errors via guard code (at some level) or by catching them and implementing some kind of recovery code. The net effect is to provide the right conditions for the program to continue. But there is no free lunch; pay me now or pay me later.
My only wish here is to have the opportunity to use a coding style which permits failure as an option without undue overheads. IMO errors as the only option impede this objective.
There is a third option: collect the errors and pass them along. You don’t do anything with the error other than pass it along, but obviously you don’t try to process the data causing the error in further steps. For example, think about parsing a programming language that terminates each line with, say, a ‘;’. If you encounter an error in that line, you just record the error and move on to the next line. When the code is done processing, the errors are just handed off.
Typically then the code that called the parser will take the errors and convert them for display. For an example of this idea see Railway Oriented Programming.
One can abuse DCGs to act as an “effect monad”, carrying error information along in a chain. However this requires that every element of the DCG passes the error information along, and in the end this is far less efficient than throwing an error.
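A hedged sketch of that DCG trick (everything here is made up for illustration: a “line” is just an integer, and even ones count as valid):

```prolog
% The two hidden DCG arguments thread an error list instead of an
% input list - a poor man's effect monad.
check_lines([]) --> [].
check_lines([L|Ls]) --> check_line(L), check_lines(Ls).

% Written with the state arguments explicit: Errs0 is the errors so
% far, Errs the errors after this step.
check_line(L, Errs0, Errs) :-
    (   0 is L mod 2
    ->  Errs = Errs0                  % valid: pass errors through unchanged
    ;   Errs = [bad_line(L)|Errs0]    % invalid: record it and carry on
    ).

% ?- phrase(check_lines([2,3,4,5]), [], Errs).
% Errs = [bad_line(5), bad_line(3)].
```

Every non-terminal in the chain must thread the list, which is the overhead (and fragility) mentioned above.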
The “it’s your fault for passing the wrong information” is fine in theory, but rubbish in practice. I’m just not smart enough not to make mistakes in calling convention and I want to know about it early when I do. I would prefer if this was so early that it was at compile time, but I’m willing to wait until run time if necessary. Debug time is definitely not when I want to find out.
According to my understanding, failure has a third scenario.
The failure that you want (expect) to happen – that causes backtracking to search for alternative solutions.
This is fundamental to logic programming and Prolog – logic is, after all, concerned with true and false.
Perhaps, what you are getting at is an extra-logical predicate that should not have any truth value, some kind of void, perhaps.
Edit:
Continuing my half-baked musing – perhaps a void by definition either completes successfully and is then deemed true, just to continue processing down the current path, or it raises an exception – there is no false.
Edit 2:
Btw, there is also the option / monad style that returns either true or false, or an error structure … to avoid exceptions and let the outer code deal with matters.
Now I can relate to this. I’m not quite sure what DDL would mean in a Prolog context. Maybe it’s capturing the argument signatures of various predicates, sort of meta-information. Also determinism assertions as you suggest.
One would think identifying predicates with side effects should be fairly straightforward, but generally it’s a transitive relationship, i.e., if a predicate uses a “DML” predicate it becomes a DML predicate. But if the side effect is completely contained within the called predicate, that needn’t be the case. For example, findall and friends must use a side effect to collect solutions, but from the outside there’s no lasting evidence, so it looks declarative. Sort of reminds me of Haskell monads, not that I really understand those either.
Regarding “DQL”, my general rule of thumb is: is commutativity of conjunction preserved, i.e., can I re-order goals without affecting the meaning? But that is up for debate.
So needs more thought on my end, but interesting possibilities.
It’s just that I find this a little difficult to do in practice. Basic Prolog control dictates I either succeed, fail, or give up (throw an error). If I want to avoid the last scenario as being counter-productive to the requirements of the program, then I’m just left with succeed and fail.
Failure implies backtracking; everything gets undone so I can’t pass the error information along that way. Succeeding and including the error info is possible, but it burdens the caller with additional complexity and responsibility. This is probably manageable if past errors don’t affect current processing and it’s just collecting the information to the end, but that seems like a bit of a special case.
The other possibility is that errors just log the information, e.g., to a debug stream, and fail, i.e., a noisy failure. Then post-processing can deal with the error log for display or whatever. (Or maybe the debug stream is the display.)
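A minimal sketch of such a noisy failure using library(debug) (check_positive/1 is a made-up example; the errors topic name is my own):

```prolog
:- use_module(library(debug)).

check_positive(X) :-
    (   number(X), X > 0
    ->  true
    ;   % Log to the debug stream, then fail so normal backtracking
        % takes over. The message only appears when the topic is
        % enabled with ?- debug(errors).
        debug(errors, "check_positive/1 failed on ~p", [X]),
        fail
    ).
```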