Prolog inline predicates

But that means you need to do things twice :frowning: All works fine as long as the expansion (term or goal) doesn't have side effects and doesn't depend on the order in which terms are processed. That should cover the majority of expansions. Note that term_expansion/2 can produce a list of terms, including directives. Expansions that do hacky things need to be protected against out-of-order calls from the tools, and you probably do not want their side effects there either.
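For example, a single term can expand into a directive plus ordinary clauses. A minimal sketch, with hypothetical names:

% Expand define_color(C) into a discontiguous directive plus a clause.
term_expansion(define_color(C),
               [ (:- discontiguous(color/1)),
                 color(C)
               ]).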

True, this is all a mess. It has been for decades and is unlikely to ever be fixed, as there is too much legacy in there. One day I hope to eliminate the need for goal expansion using inline predicates. ECLiPSe already went that way. Ideally this should be combined with partial evaluation to specialise the inlined code in the caller.

Useful links:

Inline Predicates
ECLiPSe inline(++Pred, ++TransPred)
Partial Evaluation and Automatic Program Generation

Offline partial evaluation system for Prolog written using the cogen approach


Personal notes; these may not describe the same thing as inline predicates.

Inline predicates remind me of the Register Transfer Language (RTL) used for peephole optimizations.
The Design and Application of a Retargetable Peephole Optimizer

They also remind me of continuation passing style (CPS) optimizations.
The Essence of Compiling with Continuations

2 Likes

Why do you think that? Cross-referencing data can be collected at compile time and made available via the reflection API, which can then be used by any tools that require access to that data. This means that the compiler becomes the single point where the term-expansion hooks are used. This is not theoretical: Logtalk implements reflection and tools this way. Nothing is done twice. Compilation performance is not an issue either, as most cross-referencing data is only collected when the source_data flag is turned on. The term-expansion mechanism was designed to work at compile time. Having tools call application expansion hooks at runtime breaks that assumption.

The idea of introducing the inline/2 directive to fix the usage of goal_expansion/2 feels too much like a band-aid for keeping the term-expansion mechanism as-is (recognizing here that breaking backwards compatibility should not be taken lightly).

The Logtalk compiler also does some inlining of predicate calls (Performance — The Logtalk Handbook v3.72.0 documentation), but it's more akin to what's described in your first link about ECLiPSe than to the inline/2 directive described in the second link. That said, the semantics of Logtalk's term-expansion mechanism make the idea of inline/2 mostly redundant given goal_expansion/2. As I mentioned in past discussions of the term-expansion mechanism, those semantics are implementable in SWI-Prolog without breaking current applications.

FYI

I added those links to Jan's post after it was posted, but since I wasn't sure they were exactly what Jan was referring to, I moved them out into a separate post so that others would know the links were added by me and would not consider them as being provided by Jan.

Normally I add links when it is obvious what they should point to, such as for predicates or papers, but this time it was not so obvious, so I moved them out.

I also made the post with the links a Wiki so anyone can add to it.

Well, yes and no. Tooling examines the source code for two reasons. First, the GUI debugger analyses the source to figure out the exact clause layout and variable names. This could be done at normal compile time, but that slows down compilation and wastes a lot of memory. Of course, we could go the traditional route of having separate modes for compiling with and without debug support (which is the route you take). SWI-Prolog doesn't make that distinction, which has the disadvantage that lazily obtaining the debug information is a quite complicated task, but it also has its advantages :slight_smile:

The second is the editor. It does analyze the code at the source level and must call the expansions to do as good a job as possible: it runs the current clause through the expansion on every keystroke. The added value of on-the-fly cross-referencing is IMO enormous. Of course we could also recompile after every change, but that may turn out to be too slow, and continuously replacing the code with possibly broken code is probably too intrusive as well. Note that you can edit in the same process in which your program may be running, so having (re)compilation as an explicit step makes sense. One may also be editing a file that is not loaded and that you do not want to load.

3 Likes

Help! IMO inlining is a much cleaner concept than goal (macro) expansion. The semantics are way cleaner, it is guaranteed to work even in cases where the expansion is not performed, and it is typically much easier to write. In my other life I'm a C programmer, and these days I only use macros for constants, no longer for performance.

term_expansion is a different beast. Yes, we do need that to generate and transform programs and it isn’t always neat.

One of the main reasons it's cleaner is that it is used in a different way than goal_expansion/2 is used in SWI-Prolog. But by using goal_expansion/2 differently (Logtalk provides an example here), I would expect you to be able to get most of the benefits you recognize in the inline/2 directive.

Can you explain that? I only see rather horrible double implementations in, e.g., library(apply_macros) and your own library(yall). If a simple inline would do the same, we would have gained a lot: first of all, guaranteed equivalent behavior, and second, no risk that goal expansion is bypassed due to missing declarations or dynamic construction of goals.

All that the ECLiPSe inline/2 directive does is declare the predicate to be used for the transformation of a given goal. It doesn't change the work that needs to be done to actually perform that transformation. In fact, we can write:

:- inline(maplist/2, goal_expansion/2).

goal_expansion(maplist(Closure, List), Goal) :-
    ...

Thus, we would go from the compiler calling goal_expansion/2 to the compiler consulting a table, populated by inline/2 directives, to know which predicate to call to transform a goal. Of course, this indirection step allows an arbitrary, rather than a fixed, predicate to be called. There are some advantages in doing that (relatedly, goal_expansion/2 and term_expansion/2 are not multifile predicates in Logtalk, which solves some of the issues you hinted at, providing those advantages).
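For illustration, a minimal sketch of what the transformation itself might look like, whichever predicate ends up being called (the unfolding strategy and the helper name are hypothetical):

% Unfold maplist/2 into a conjunction when the list is known at
% compile time (the trailing 'true' could be peeled off afterwards).
goal_expansion(maplist(Closure, List), Goal) :-
    is_list(List),
    unfold_maplist(List, Closure, Goal).

unfold_maplist([], _, true).
unfold_maplist([X|Xs], Closure, (call(Closure, X), Rest)) :-
    unfold_maplist(Xs, Closure, Rest).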

If that is true, that is not what I want. I want real inlining, where the clauses of the predicate are partially evaluated given the already known arguments from the calling context. If the result is not recursive, it should be inlined as a control structure. Otherwise, as for example with maplist/N, a specialized version should be created and called. That is not trivial, but it is quite doable for a large number of typical cases, such as dealing with constants and with meta-predicates whose goal argument is known at compile time, so that call/1 and call/N can easily be expanded to concrete goals.
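To make the intent concrete: a call such as maplist(succ, Xs, Ys) could be compiled into a call to a generated, specialized predicate along these lines (the generated name is purely illustrative):

% Specialized maplist(succ, Xs, Ys) with the meta-call resolved away,
% enabling first-argument indexing and last-call optimization.
maplist_succ([], []).
maplist_succ([X|Xs], [Y|Ys]) :-
    succ(X, Y),
    maplist_succ(Xs, Ys).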

http://eclipseclp.org/doc/bips/kernel/compiler/inline-2.html

The Logtalk compiler does several of those inlining cases automatically (see link in one of my previous replies) and transparently (notably, when using the debugger, the user still sees the original code).

2 Likes

For what it's worth, I very rarely use term expansion, although I very frequently employ code rewriting. I find it much easier to manage all this by explicitly generating a Prolog file to read in. Getting it started is more work, but once the code expander is implemented, it's much easier to understand and use the result if I can open the expanded code in an editor.

1 Like

I fully agree. It may need some attention to cooperate with the tooling, though. A good first step is (as said) to avoid assert/retract from term_expansion/2 if possible. If that is impossible, disable the expansion rules using the condition

\+ current_prolog_flag(xref, true).
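A minimal sketch of such a guard in a side-effecting expansion rule (the expanded term and the recorded fact are hypothetical):

% Record a declaration only during real compilation; when the
% cross-referencer or IDE runs the hooks, the xref flag is true and
% this rule simply fails, leaving the term alone.
term_expansion(my_declaration(Name), []) :-
    \+ current_prolog_flag(xref, true),
    assertz(known_declaration(Name)).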

The main problem with the term-expansion mechanism that SWI-Prolog (and other systems) inherited is that it gives users little control over the expansions that are actually applied when a source file is compiled. What gets applied depends on the user's own expansion rules, on expansion rules added by libraries used by the application code, and on expansion rules added by libraries loaded but not used, at least directly, by the application code. It's not easily predictable and doesn't favor reliable and reproducible builds. This inherently chaotic nature is also quite handy when everything just works: users simply load and use libraries, several of them doing their own expansions, and, with no additional typing or care, magic happens.

The other extreme is to explicitly tell the compiler that it is going to use a single, specific set of expansion rules, and nothing else, when compiling source code. That single set may itself combine multiple sets of expansion rules but, in that case, the details of how those multiple sets are combined are fully under user control. That requires more typing and more user awareness. Magic still happens, but it's (more heavily) regulated.

The controlled solution doesn't exclude, however, the chaotic one. By specifying that the single set of expansion rules to use is in user, or, more accurately, in the {user,system} set (in the specific case of SWI-Prolog), or in a set that also includes specific user-defined modules, the controlled approach can subsume chaotic setups.

Logtalk uses the controlled solution (via hook objects, which can also be hook modules, to specify where to find the expansion rules). SWI-Prolog (and other systems) use the chaotic solution. Note that controlled/chaotic is not synonymous with good/bad. There's a whole spectrum of usage scenarios where one or the other approach is preferable in practice. But users should have the option to choose between the two approaches.
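For reference, a minimal sketch of a Logtalk hook object (the object name and the expansion rule are illustrative):

:- object(my_expansions,
    implements(expanding)).

    term_expansion(foo(X), bar(X)).

:- end_object.

A source file is then compiled with logtalk_load(source, [hook(my_expansions)]), making these the only expansion rules applied to that file.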

1 Like

This is an interesting topic. I hope it is OK to ask a few questions.

If I understand correctly, the topic is term/goal expansion, code rewriting, meta-predicates, and so on. The discussion is both at the level of "why do we need it?" and at the level of "how do we implement it?" Is that correct?

There are two seemingly conflicting opinions in circulation, and I am summarizing them as I have understood them (so probably I am wrong):

  1. Code re-writing in general points to a weakness in the language or the implementation.
  2. Code that writes (code that writes (code …)) is the ultimate weapon of the programmer and it is a highly desirable property of a system to support that.

Are those two statements relevant to the discussion? Are they really in conflict? Is there any resolution in sight?

I will now list a few known examples of the need for inlined predicates, code rewriting and so on. Can you explain what would be your desired final, perfect solution for each case?

  • “Anonymous”, one-off predicates, or “lambdas” as in library(yall);
  • Meta-predicates in general, as in “algorithm as argument to an algorithm”;
  • Meta-predicates as re-written by library(apply_macros);
  • DCGs, dicts: both seem to be cases of "syntactic sugar", but writing code without them would be just silly;
  • Re-writing for pragmatic reasons.

Is the last one the more general case of DCGs and dicts as well as library(apply_macros)?

Other examples would be using term expansion instead of memberchk/2 (at the bottom). Another related example would be expanding the definition of a DFA to code that implements the DFA (where predicates are states and predicate calls are transitions), instead of interpreting it; a sketch follows below. What is common to all examples is that the expanded code behaves better (determinism, backtracking, efficiency…) while the interface (what the programmer has to write) stays sane.
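A minimal sketch of the DFA idea, with a hypothetical dfa/3 description and names: each state becomes a predicate, each transition a clause, and final states accept the empty input.

% dfa(Transitions, StartState, FinalStates) expands into one predicate
% per state plus an accept/1 entry point.
term_expansion(dfa(Transitions, Start, Finals), [Entry|StateClauses]) :-
    Entry = (accept(Input) :- StartGoal),
    StartGoal =.. [Start, Input],
    findall(Clause, state_clause(Transitions, Finals, Clause), StateClauses).

% One clause per transition: From([Char|Rest]) calls To(Rest).
state_clause(Transitions, _, (Head :- Next)) :-
    member(t(From, Char, To), Transitions),
    Head =.. [From, [Char|Rest]],
    Next =.. [To, Rest].
% Final states accept the end of the input.
state_clause(_, Finals, Head) :-
    member(Final, Finals),
    Head =.. [Final, []].

For instance, dfa([t(s0,a,s1), t(s1,b,s0)], s0, [s0]) would expand into accept/1 plus s0/1 and s1/1 clauses that run the automaton deterministically via first-argument indexing.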

Usability: the only important question for me personally is: if I use mechanism X (for any X), can the compiler/IDE still help me out? Spot errors? Provide useful warnings?

(The last paragraph above comes from personal experience. In the few code bases I have seen, in every single one, there is a complex code generation/transformation lurking somewhere in the depths, and it provides 90% of the business value. It also has to be regularly “improved”.)

2 Likes

There are several interesting questions in your post. Provided here are some notes on the following ones:

In the particular case where expansion rules don't change semantics but strictly improve performance, testing should be able to run the code that benefits from the expansions both with and without the expansion rules being used. To exemplify, in SWI-Prolog that could mean running tests with library(apply_macros) loaded and absent, or using different values for the optimization flags and command-line options. In Logtalk, that could mean running tests in debug, normal, and optimal mode (inlining and other optimizations are applied depending on the compilation mode). Whatever the case, comparing test results may help uncover expansion bugs (but not, of course, prove their absence).
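As a sketch of the SWI-Prolog case (run_tests/0 is the plunit entry point; the test file name is a placeholder):

% With the expansions active:
?- use_module(library(apply_macros)), [my_tests], run_tests.

% In a fresh session, without loading library(apply_macros):
?- [my_tests], run_tests.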

But when testing exposes a bug in the expansion rules, debugging may not be trivial. As Alan noted earlier, generating a file that you can open and examine can greatly simplify debugging compared with the compiler's in-flight code expansion (in my particular case, the Logtalk compiler generates intermediate Prolog files, which can be examined). In some cases, introspection predicates such as clause/2 and listing/0-1 can help, especially if there's a one-to-one relation between source file terms and expanded terms. Your post strongly hints at the need for better debugging support for code expansions. But it's not clear to me how much a generic, system-provided solution could help in "complex code generation/transformation" cases.
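For the simple cases, a sketch of using those introspection predicates (foo/1 stands in for some predicate whose clauses were expanded):

% Inspect the clauses the compiler actually stored after expansion:
?- clause(foo(X), Body).

% Pretty-print the stored (expanded) definition:
?- listing(foo/1).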

Having implemented some common monads (optionals and expecteds), I don't see how inlining would be relevant for implementing them. In the two cases I mention, the implementation translates to interfaces/protocols, i.e. a set of predicates. Do you have some specific monads in mind? Or are you hinting at using inlining simply to optimize the performance of the monad interface predicates?

Note: The post the question was quoted from was deleted. After reading more details about inline predicates and partial evaluation, I realized that my post was not relevant, so I removed it to avoid adding confusion to this topic.

Not at present.

No, as at present I don’t use those.

I disagree with the first, and take the Common Lisp perspective, which is your second: the language cannot be all things, but thanks to code rewriting, the coder can make the language become anything they want it to be.

I don’t think there is any question here of “why do we need it?”.

3 Likes