Types with SWI-Prolog

  1. I didn’t know about check/0. Thank you for pointing it out.
  2. It seems too easy to fool it. Or maybe I am missing something?
binary_tree(nil).
binary_tree(t(_, L, R)) :-
    binary_tree(L),
    binary_tree(R).

foo(X) :-
    binary_tree(X).

bar :-
    foo(foo).        % foo(foo) can never succeed: the atom foo is not a binary tree

The step itself does say “Checking trivial failures” so maybe this isn’t considered “trivial”?

I would still like to know, however: is this a type error or another kind of error?

I guess that if you define a type for binary_tree/1, it becomes a type error. I merely wanted to point out there are ways to prove a call makes little sense that do not really involve typing. And yes, the check just checks concrete goals and verifies there is at least one clause that may match. Tools like these do not prove the program is correct (which is in general impossible anyway), but they do prove that the program shows behaviour that is most likely not intended. The compiler also complains on e.g., X == Y if one of the variables is a fresh variable and the goal thus always fails. This isn’t wrong, just most likely not intended. If you want to fail, use fail/0 :slight_smile:
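As a made-up illustration of that last point (the predicate name is hypothetical), the following clause can never succeed because Y is still a fresh variable when ==/2 runs, which is exactly the kind of goal the compiler warns about:

broken(X) :-
    X == Y,        % Y is fresh here, so this test always fails
    writeln(Y).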

Hi Jan,

Since “Prolog with Types” seems to be a broad topic with many (incompatible?) views, I was just trying to get a feel for what you and others consider to be the scope of the discussion.

I am sorry I am not using well-established terminology. I didn’t want to provide undeniable proof that I don’t really know what I’m talking about.

I would like to have a typed Prolog. I suspect one can do a lot at compile time without adding any annotations or declarations.

Something I have observed is that “Prolog with Types” discussions go into topics like determinism and modes. My impression so far is that “simple” compile-time type inference doesn’t need to address these issues in order to bring value.

I hope I am making sense.

EDIT: it seems that the package by Tom Schrijvers comes closest to what I mean.

The work by Tom is in the end exactly what I do not want. He designed a type system that proves type consistency. The price you have to pay is a lot of annotations to give the reasoner enough material, as well as explicit indications that some goal should not be type checked. This type system also deals poorly with type overloading (handling multiple types in one predicate). This (in my view) results in a de-facto different language.

I want static analysis that finds likely wrong programs with as few user annotations as possible, while allowing the user to write Prolog the way s/he likes. If you want to allow for type overloading you’ll have to track the control flow for arguments of different types. Modes and determinism are, at least when I was thinking about this, indispensable. In its simplest form, you must know that atom/1 lets an atom pass and blocks anything else. You also need to realise that atom/1 followed by a ! implies that subsequent clauses won’t see an atom. And yes, such a system cannot say “your program is free of XYZ errors”, but as long as it cannot say “your program is correct” I only care that it points at as many errors as possible.
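A minimal made-up fragment of the pattern described here: the analyzer has to know that atom/1 only lets atoms through and that, after the cut in the first clause, the second clause can no longer receive an atom:

to_atom(X, X) :-
    atom(X), !.            % only atoms pass, and the cut commits to this clause
to_atom(X, A) :-
    term_to_atom(X, A).    % X can no longer be an atom here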

Ciao is definitely the most advanced in this area. Although I never really used their assertion language, I think it got too complicated and typically too slow to be of real practical value. That is why Edison came up with a simplified runtime checking system based on the Ciao assertion language that proved (for him) more practical.

Possibly the sweet spot doesn’t exist. On the other hand I’m quite sure it is possible to write rather simple tools that will point at a lot of dubious code. Typing is merely one target though.


Inspired by watching some YouTube lectures by Barbara Liskov on her type hierarchy theory (which I find much more useful than OOP), I put together the following diagram of what I consider to be Prolog supertypes and subtypes.

The next step is to gather the predicates related to each.

any
├── atomic ; constant
│   ├── number
│   │   ├── integer
│   │   │   ├── nonneg            % Integer >= 0 
│   │   │   ├── positive_integer  % Integer > 0 
│   │   │   ├── negative_integer  % Integer < 0
│   │   │   └── between(L, U)     % Integer >= L, Integer =< U
│   │   └── float
│   │       ├── rational          % includes integers
│   │       └── between(L, U)     % Float >= L, Float =< U
│   ├── text
│   │   ├── atom 
│   │   │   ├── symbol            % with or without single quotes
│   │   │   ├── char              % Atom of length 1
│   │   │   ├── code              % Unicode point
│   │   │   └── blob
│   │   │       └── stream 
│   │   └── string                % double or single quoted
│   └── boolean 
└── compound
    ├── callable                  % Can also be functor/0 symbol
    ├── list ; proper_list
    │   ├── list_or_partial_list
    │   ├── list(Type)
    │   ├── chars
    │   └── codes
    ├── tree ; graph            % Not defined as a formal type
    │   ├── cyclic
    │   └── acyclic
    └── dict

I’ve edited the above diagram to remove var after reading @Jan’s comment above:

One of the issues with this is that it blurs the distinction between types and instantiation. var, nonvar and ground are not types.

What this little exercise made me think is that using the documentation for must_be/2 as the basis leads to a very silly type system, because important things are missing here, such as collections like set, queue, etc.
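For reference, many of the names in the diagram are already recognised at runtime by must_be/2 and is_of_type/2 from library(error) in SWI-Prolog; a quick toplevel sketch:

?- is_of_type(positive_integer, 3).
true.

?- is_of_type(between(1, 10), 7).
true.

?- is_of_type(chars, [h, i]).
true.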


Can you provide a link (or links)? While I do find some of her videos on YouTube, I can’t seem to find the one(s) on type hierarchy theory. :smiley:

Her academic paper is at https://www.cs.cmu.edu/~wing/publications/LiskovWing94.pdf (I must confess I haven’t read it very far).

One of her talks is at https://www.youtube.com/watch?v=GDVAHA0oyJU which covers a lot of ground besides her work on abstract data types.

I generally find she is a brilliant communicator who covers problems I’ve encountered. I regard op/3 as being as harmful as goto, so it’s nice to hear a Turing award winner articulate clearly why “extensibility” is an awful idea because it encourages unreadable code.

Some other talks by her on YouTube are at https://www.youtube.com/watch?v=qAKrMdUycb8&t=3717s and https://www.youtube.com/watch?v=_jTc1BTFdIo.


Personally I do not see them as missing. Most programming languages, especially the popular object-oriented ones, have collections, but those languages are also not split into many variants; if there is only one official version of a language, then an addition to it becomes the standard. Prolog has variants such as SWI-Prolog, SICStus, GNU Prolog, ECLiPSe and YAP, and Prolog has the standard definition, which does not define collections, so they would not be in the base definition of types.

What might be better would be if each add-on/plug-in that uses certain types added some has_type/2 clauses, but then again has_type/2 is not in the standard, so each variant of Prolog may or may not have it.
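A sketch of what that could look like in SWI-Prolog, assuming error:has_type/2 is still the multifile hook behind must_be/2 and is_of_type/2, and using a purely hypothetical queue(Front, Back) representation:

:- multifile error:has_type/2.

% Register a `queue' type: a term queue(Front, Back) holding two lists.
error:has_type(queue, queue(Front, Back)) :-
    is_list(Front),
    is_list(Back).

After loading this, must_be(queue, queue([a,b], [])) would succeed, while other terms raise the usual type error.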

The real problem is that Prolog is such an old language, one that is still used today, that it did not evolve along the same lines as other languages.

The other way I look at the whole typing thing is that Prolog is not a programming language for solving problems directly; instead, Prolog is used to create an intermediate language, and that language is used to solve the problem. I think of Prolog as having an indicator that on one end points to fully deterministic code and at the other end points to pure code.

The more deterministic the code is, the more type checking is needed. The purer the code, the less type checking is needed; in pure code, AFAIK, there should be no type checking because the code should be fully abstract. Also, as you move from deterministic to pure, the arguments move from being either strictly input or strictly output to being both.

The reason I think you don’t see Prolog talked about in this way is that most people don’t think of it like that, and there is no clear line between the code used for solving the problem and the code used for creating the language that solves it.

Make sense? :smiley:

I added the image to give some idea of what I am thinking, but I don’t like the colors on the indicator; words such as deterministic, semi-deterministic and pure should be used instead.

Your changes and notes have me rethinking how I think of has_type/2, and now I am starting to think of it more as Hoare logic than as typing rules.


Thinking more about that, I’ve realized that this might be a valid alternative to predicate wrapping for determinism checking.

trace(should_be_det, fail)        % a det predicate should never reach the fail port
trace(should_be_multi, fail)      % nor should a multi predicate
trace(should_be_semidet, redo)    % a semidet predicate should never be retried
trace(should_be_failure, exit)    % a failure predicate should never succeed

I thought this was unusable, wrongly believing that Redo was also generated for the failure of an (->)/2 condition and for alternative clauses (as in trace/0), but fortunately I’ve verified this is not true.

Doing some experiments, I’ve verified that trace ruins tail-call optimization. Is there a possibility to avoid that?

I doubt it if you want to check for one of the exit ports. You’ll have to come back into the wrapper after completion in that case, so no last-call optimization.

I’ve been thinking in the past about efficient low-level implementations for this. Not for really long, but long enough to know that this isn’t trivial. Suppose we have

:- pred p is det.

p :- q, r.

Now, typically r/0 will be called using last-call optimization, so we won’t see the exit of p/0. It might be doable if we could infer that r/0 must be det, but we can’t. The fact that p/0 is det merely says that r/0, called the way p/0 uses it, must be det. I’ve been faced with similar problems getting the port counts for the profiler correct. Knowing when something is called is easy, but knowing what happens afterwards at the box level is probably not possible (in a reliable way; for profiling some errors might be acceptable) without sacrificing last-call optimization.

[note that multi may see a failure if the last answer is produced while leaving a choice point]

I’ve done further experiments and it seems this is a rather effective way to test that determinism expectations are not violated (given, of course, an exhaustive test suite).

My remark is that, although predicate wrapping is a nice general solution and very useful for debugging complex conditions, I don’t think it is the right tool for this kind of tracing because it is too heavyweight. Something integrated with execution (like trace/0 and spy/1) seems a more reasonable choice, so that the determinism check can be left enabled everywhere except in the production build.

Concerning the big obstacle of tail-call optimization, it would be enough to optionally forget the pending trace request when executing the depart VMI (then relying on a proper determinism check for the called goal).

BTW, noting that the O_DEBUGGER code is quite pervasive, also in hot paths, I wondered how much is gained by compiling the production build with O_RUNTIME, but it seems that such a build is rather broken.

If I use Term = a(b(K,L), d(M)), isn’t this Term a kind of type? Why is it not a type? When all variables are ground, Term is a fully instantiated object, isn’t it?

Thanks for pointing at the name. Yes, I think that is the right direction of thought for typing in Prolog. I’m afraid that a decent system should take mode and determinism declarations into account. That is probably a bit of a challenge. I’ve started work on this several times, but never found the time to get beyond a proof of concept that works for a rather restricted set of programs.

In the meanwhile we have the => operator and the det/1 declaration, which solve a lot of @abramobagnara’s problems.
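As a minimal sketch of what those provide (the predicate name is made up):

:- det(twice/2).     % twice/2 is expected to succeed exactly once; the runtime reports violations

twice(X, Y) =>       % SSU rule: if no clause head matches, an error is raised instead of a silent failure
    Y is X * 2.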