Single Sided Unification rules manual entry -- I am reading it but don't grasp it ... why?

Hello,

I am reading the Single Sided Unification rules manual entry, but I don’t manage to grasp from it what SSU actually is.

To help me out I started to read the Picat users guide [1], section “1.2 Defining Predicates”, where the difference between => and ?=> is described. It seems much simpler to follow, but I am still not sure that I get it: how exactly does pattern matching differ from unification, and why does Picat introduce Horn clauses above and beyond what it already has?

Also, the sum/2 example in the Prolog manual doesn’t seem obvious to me: what mechanisms are at play to catch all those errors, and why does it matter when considering the semantics of => …

The following seems to be key here, but as written it is too terse for me to understand what is actually being said:

" The subsumes_term/2 guarantees the clause head is more generic than the goal term and thus unifying the two does not affect any of the arguments of the goal. This implies all output unification must be done after the head unification."
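For reference, this is how subsumes_term/2 behaves at the toplevel; it succeeds only when the first term is at least as general as the second, and it leaves no bindings behind:

    ?- subsumes_term(f(X, Y), f(a, b)).   % pattern is more general: true
    true.

    ?- subsumes_term(f(a, Y), f(X, b)).   % would have to bind the second term's X: false
    false.

    ?- subsumes_term(f(X, X), f(a, b)).   % repeated variable makes the pattern more specific: false
    false.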

Any further explanations would be much appreciated,

Dan

[1] http://retina.inf.ufsc.br/picat_guide/#x1-50001


I cannot say much about SWI-Prolog’s version of =>. However, here is a little description of Picat’s version.

  • The reason => and ?=> were introduced in Picat (via B-Prolog) is that it makes more sense for non-determinism (?=>) to be stated explicitly rather than being the default behaviour (as in Prolog).

  • The main difference, from a user’s perspective, when using => and ?=> in Picat is that variables in the head do not unify. E.g. this doesn’t work in Picat:

    p(X,X) => true.  % does not work as expected
    

    Instead one must create two different variables and then unify them in the body. E.g.

    p(X,Y) => X = Y.
    

    This extra step is easy to forget when coming from a Prolog background. Trust me on that. And when porting Prolog code it was a lot of work to convert :- to either => or ?=>, depending on the logic of the predicate.

  • The reason Horn clauses (“Prolog style” predicates) were later introduced in Picat version 3 is that Neng-Fa realized that there is a huge amount of Prolog code available that would be much easier to port to Picat if Picat supported Horn clauses (together with arg/3, functor/3, '=..'/2, \+, and !). I agree, it’s much easier, but sometimes it can still be quite a lot of work: since Picat doesn’t support op/3, one has to $-escape terms so they are not interpreted as functions, etc.

(End of Picat stuff.)
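For comparison with the Picat behaviour above, here is a small sketch of how SWI-Prolog’s => treats a repeated head variable (hypothetical predicate name; the call outcomes follow from the subsumes_term/2 check described in the manual):

    same(X, X) => true.

    % ?- same(a, a).   the rule applies: the head subsumes the call
    % ?- same(a, b).   the rule does not apply
    % ?- same(A, B).   the rule does not apply either: matching would
    %                  have to bind A = B, which single sided unification forbids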

BTW, the latest release of SWI-Prolog has an optimization that allows this style for regular clauses, with no loss of efficiency (quoting from the release notes):

Compile

    p(X) :- X = f(_), g(X).

such that the X = f(_) unification actually happens as part of the head unification (which is faster) and clause indexing works on such clauses.
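A rough illustration of what that optimization means (hypothetical; the actual compiled form may differ, as discussed later in this thread): the compiler treats the clause as if the unification had been written in the head, so argument indexing can see the f/1 functor:

    % Source clause
    p(X) :- X = f(_), g(X).

    % Effectively compiled as if written
    p(f(A)) :- g(f(A)).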

Thank you.

But I am still confused when trying to parse (with my brain) the explanation.

Perhaps “This implies all output unification must be done after the head unification.” is the key sentence in the paragraph I mentioned above:

Edit: the paragraph below is probably incorrect. So what exactly does the subsumes check do, and how?

I guess this is achieved by having Pattern built out of n independent variables A1,…,An, without any variables bound to each other, and by having subsumes_term(Pattern, Args) occur before the Guard, together with the assumption (understanding) that, to succeed at this stage, this also requires the Vn’s not to be bound to each other.
end of edit

(If this is the case, perhaps it would be best to add one sentence about it to the documentation, since it didn’t jump out at me immediately, which made the example “expansion” as an explanatory device obscure to me.)

So, (part of the) semantics of => implicitly forces unification to occur in the body, although it may also occur in the Guard.

Now, why exactly is deferred unification so important when creating a deterministic predicate?

There is the classic example presented in the documentation, max/3, where delayed unification enables correct behavior while a solution that builds on unification in the head leads to an incorrect one. But what is the underlying and general reason that delayed unification is necessary for deterministic predicates to achieve correctness?
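For concreteness, here is the kind of contrast the max/3 example makes, written out as two hypothetical predicates (max_cut/3 with head unification and a cut, max_ssu/3 in the SSU style from the manual):

    % Head unification + cut: not steadfast
    max_cut(X, Y, X) :- X >= Y, !.
    max_cut(_, Y, Y).

    % ?- max_cut(5, 2, 2).   succeeds (wrong!): the first head cannot unify
    %                        with the call, so control falls through to the
    %                        second clause and the cut is never reached

    % SSU: the output is unified only after committing to a rule
    max_ssu(X, Y, Z), X >= Y => Z = X.
    max_ssu(_, Y, Z) => Z = Y.

    % ?- max_ssu(5, 2, 2).   fails (correct): the first rule is selected and
    %                        committed to, then Z = X, i.e. 2 = 5, fails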

Can an explanation be added, instead of citing just one example: what is the principle here?

Edit: I guess it’s somehow related to pattern matching vs. unification in the head, with pattern matching being a weaker form of unification. Perhaps a definition of pattern matching, in the sense used here, would be very useful to have.

thanks,

Dan

p.s. I am not getting far when I read the paragraph:

Clause head seems to refer to p(V1,V2,…,Vn), or to Head, or to both?
Goal term seems to refer to Body, or perhaps to a caller’s goal term that is not shown here. Perhaps
p(V1,V2,…,Vn) is a callee term that is more general than the goal term, which is not shown but implied?

It is implied that Head, Guard and Body include (some of the) variables V1, … Vn.

Head, Guard => Body.

“This is semantically equivalent to the Prolog clause below. The subsumes_term/2 guarantees the clause head is more generic than the goal term and thus unifying the two does not affect any of the arguments of the goal. This implies all output unification must be done after the head unification.”

    p(V1,V2,…,Vn) :-
        Pattern = p(A1,A2,…,An),
        Args = p(V1,V2,…,Vn),
        subsumes_term(Pattern, Args),
        Pattern = Args,
        Guard,
        !,
        Body.
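To make the quoted expansion concrete, here is a sketch of what a hypothetical rule max(X, Y, Z), X >= Y => Z = X. expands to (not the exact compiler output):

    max(V1, V2, V3) :-
        Pattern = max(A1, A2, A3),
        Args = max(V1, V2, V3),
        subsumes_term(Pattern, Args),  % the head may not be more specific than the call
        Pattern = Args,                % safe now: cannot instantiate the caller's arguments
        A1 >= A2,                      % Guard
        !,                             % commit: this is what makes => deterministic
        A3 = A1.                       % Body: the output unification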

p.s.

Btw, the phrase output unification is also not fully clear at first, because, cognitively (and didactically), there are several jumps here in my mind. Here is what needs to be understood in order to understand this phrase:

  1. The head of a Prolog clause can have arguments that are variables.
  2. Those variables can be treated by the caller as input, when bound by the caller, or as output, when not bound by the caller.
  3. When they are left unbound by the caller, and hence treated as output variables, the expectation is that they will be unified (and hence bound) in the head or the body of the Prolog clause.
  4. If the above occurs, then an output unification happens somewhere.

Taking all of the above together: for the => semantics, output unification must occur at or after the Guard.
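A minimal sketch of points 1-4, using a hypothetical wrap/2 written with =>: the second argument is the output, and its unification happens in the body, after the commit:

    wrap(X, W) => W = box(X).

    % ?- wrap(1, W).        W is unbound: the body binds it, W = box(1)
    % ?- wrap(1, box(1)).   W arrives bound: the head still matches, because
    %                       in the head it is a plain variable; the body
    %                       unification box(1) = box(1) succeeds
    % ?- wrap(1, box(2)).   head matches, the rule commits, and the body
    %                       unification fails: the whole call fails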

Edit:

Note that the term output unification doesn’t appear anywhere else in the Prolog documentation, so this is the first time the term is used (at least per a search). A Google search leads to a hit within a book on programming language theory, with a footnote containing a definition:

“In contrast to input matching one speaks of output unification to refer to a unification of two terms which appear in an output mode argument (of a literal w.r.t a given program). This … stresses that for the unification of terms occurring in output mode arguments there is no restriction of the direction of the binding (from goal to clause head (output) or vice-versa (input)”

So, this leaves the reader to make an (educated) guess what is meant here.

Edit: with all the above, perhaps I now start to understand why => is called single sided unification: for correctness, unification is not allowed to occur in the head, but must occur in the Guard or Body of the overall rule clause.

That is why this optimization is not performed for dynamic code. clause/2 is not supposed to work in general on static code. It does for SWI-Prolog, but there are certain limitations such as the above and other optimizations performed by the compiler that may cause clause to return a term that differs from the original. Note that the above is good enough for meta interpretation.

Technically it can occur in the Guard. This is bad style though. Unfortunately it is not feasible to check this efficiently. There are two alternatives: only allow safe predicates (type checks, comparison built-ins) in the guard, or allow arbitrary guard goals and hope the user doesn’t break the rules. For now the latter is what is implemented, and I think that will remain the case. We can provide a term_expansion/2 rule set to restrict the guard.

To ensure the predicate is steadfast: for any output argument A and value V, it behaves the same whether called as f(A), A = V or as f(V). Called the latter way, V is unified with the argument as part of the head unification. This unification may fail, which causes us not to reach the => (or !, etc.) and to continue with the rest of the clauses. If the corresponding argument in the head is a plain variable, it is always bound to the (possibly instantiated) output argument. This binding cannot fail, and we always reach the => if the input arguments (and guard) match.
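A small sketch of that steadfastness property, with a hypothetical f_ssu/2:

    f_ssu(X, R), integer(X) => R = ok.

    % Steadfast: these two calls behave identically
    % ?- f_ssu(1, R), R = bad.   fails
    % ?- f_ssu(1, bad).          also fails: R in the head is a plain variable,
    %                            so the head matches and the guard holds; the
    %                            rule commits, and then R = ok fails against bad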


Just for terminology purposes.

Is it correct that deterministic in this context (or perhaps in all contexts) is the same as functional?

I guess not, since binding can be non-functional …

So, to actually obtain a function an additional restriction is somehow still required. But these are the building blocks for getting at a function, if this is the intended use of =>.

Dan

Ok, cool, indeed, the optimization is switched off for dynamic predicates:

:- dynamic p/1.
p(X) :- X = f(_), g(X).

?- clause(p(g(A)), B).
B =  (g(A)=f(_23292), g(g(A))).

Oki Doki. The release notes didn’t say so.