I have been considering Rust for some time now, but the lack of a direct FFI was quite a bit of “friction”.
I am really glad you are creating a proper interface that also complies with the Rust philosophy.
I am looking forward to giving this a try …
Dan
p.s. For beginners like me in Rust and foreign predicate development, it would be great if you could have some tutorial-style documentation showing how the FFI is used, starting from the very simplest example, to help get started.
In this example, what sort of predicate would you expect to be the result? Should it take three terms? Where should the output term go? These are things that an ffi will have to figure out if you don’t want to specify them yourself.
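For concreteness, here is roughly what the explicit answer can look like at the level of the underlying C API (the same operations the Rust crate wraps). The three-argument shape and putting the output in the last argument are choices the author makes, not something the ffi can decide for you; the names here are made up for illustration:

```c
#include <stdint.h>
#include <SWI-Prolog.h>

/* add_two_ints(+A, +B, ?Sum): the programmer decides there are three
   term references and that the result is unified with the third. */
static foreign_t
add_two_ints(term_t a, term_t b, term_t sum)
{ int64_t x, y;

  if ( !PL_get_int64_ex(a, &x) ||     /* raises a type error on non-integers */
       !PL_get_int64_ex(b, &y) )
    return FALSE;

  return PL_unify_int64(sum, x + y);  /* the "output" is just a unification */
}

install_t
install(void)                         /* called when the library is loaded */
{ PL_register_foreign("add_two_ints", 3, add_two_ints, 0);
}
```

Because the last step is a unification rather than a return value, add_two_ints(1, 2, X) binds X to 3, while add_two_ints(1, 2, 3) simply succeeds and add_two_ints(1, 2, 4) fails.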
For sure, it is possible to create Rust macros that will generate the required code to do what you want (assuming what you’d want here is automatic conversion from input term arguments to values, and automatically inferring output terms), but I don’t think that is a good thing. It means the ffi does a lot of implicit stuff and makes a lot of assumptions about how your foreign predicates should look and behave.
Furthermore, by eliminating unification altogether, it’d be impossible to do things like ‘unify this value, see if that works, and if not do something else’. Or, ‘unify these 3 terms to these 3 things, but roll back these unifications if any one of them fails’. Or, ‘take these two partially grounded terms and unify them’, which simply has no equivalent in terms of pure input-output functions. Or, ‘do x if this term reference contains a variable, but do y in case it is ground’, which, again, requires one to know something about term references, and not just the values they contain. Or, ‘do x if this term contains an atom, but y if this term contains a number’, which wouldn’t really work if the ffi did all value conversions for you. Etc.
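To make that concrete, a small sketch at the C level (the Rust crate exposes the same primitives): try two unifications as a unit, undo them if the second fails, and branch on whether a term still holds a variable. The predicate name and values are invented; the PL_* calls are the actual interface, and registration works as in the earlier example.

```c
#include <SWI-Prolog.h>

static foreign_t
try_pair(term_t t1, term_t t2)
{ fid_t fid = PL_open_foreign_frame();  /* mark a point we can roll back to */

  if ( PL_unify_atom_chars(t1, "hello") &&
       PL_unify_integer(t2, 42) )
  { PL_close_foreign_frame(fid);        /* keep both bindings */
    return TRUE;
  }

  PL_discard_foreign_frame(fid);        /* undo any partial bindings */

  if ( PL_is_variable(t1) )             /* decide based on the term itself, */
    return PL_unify_atom_chars(t1, "fallback");  /* not just its value */

  return FALSE;
}
```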
Like it or not, term manipulation is at the heart of SWI-Prolog (and I assume other Prologs as well) and hiding that means crippling the interface. I don’t know how GNU Prolog or SICStus get away with it, but I really don’t see the point. I’d rather stick closely to what actually happens, which means term manipulation.
That said, if you want something simpler, it is completely possible to use the ffi as it stands and build a macro layer on top of it which lets you specify functions without ever talking about terms or unification. I’ll leave that as an exercise for anyone motivated enough to do so.
For sure! I considered writing a user guide before doing my announcement, but it felt like I was already putting off my announcement for too long. For now, hopefully you can get a long way by looking at the examples that are there, or the reimplementation of terminus_store_prolog.
I hope you’ll give it a try, and let me know what your experiences are with it!
The SWI-Prolog FFI started from the Quintus one, which is also the basis of the SICStus one (don’t know about GNU). It consists of a Prolog declaration of the foreign functions (name, types). This is used to generate wrapper functions that are then compiled and linked with the target foreign code. This stuff is still there in library(qpforeign). The SICStus emulation comes with swipl-lfr.pl, which does the same job as SICStus sp-lfr. Both interfaces also support passing arguments as term_t and use C functions to extract from, create, and unify Prolog terms.
I decided to extend the set of interface functions, notably with convenience functions that unify directly with native types, so you can unify a term_t with an integer, an atom created from a char*, etc. I like this much better for several reasons.
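A minimal sketch of the difference, with made-up predicate names: the first version builds a term and then unifies it, the second unifies the term_t directly with a native value.

```c
#include <SWI-Prolog.h>

/* answer(?X): without the convenience functions, build a term first ... */
static foreign_t
answer_verbose(term_t t)
{ term_t tmp = PL_new_term_ref();

  return PL_put_integer(tmp, 42) && PL_unify(t, tmp);
}

/* ... with them, unify directly with the native value (similarly for an
   atom created from a char*, via PL_unify_atom_chars(), etc.) */
static foreign_t
answer(term_t t)
{ return PL_unify_integer(t, 42);
}
```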
Yes, for add_two_ints(x,y) the automatic wrapper generation is great. As soon as the Prolog types to be handled get more complicated, or the desired mapping from Prolog to C or vice versa does (think about atom vs. string to char* or wchar_t*), things get harder. Quite often you’ll end up with automatically generated wrappers for multiple C functions and Prolog code that switches between those. At least, that was what we found ourselves doing when programming against early versions of Quintus that didn’t yet have term_t back in the 80s.
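As an illustration of the kind of mapping that is painful to generate but easy with term_t: accept either an atom or a string and get a UTF-8 char* out of it in one call (a sketch; the predicate is made up):

```c
#include <stdio.h>
#include <SWI-Prolog.h>

static foreign_t
greet(term_t name)
{ char *s;

  /* accepts both an atom and a string; raises a type error otherwise */
  if ( !PL_get_chars(name, &s, CVT_ATOM|CVT_STRING|REP_UTF8|CVT_EXCEPTION) )
    return FALSE;

  printf("hello, %s\n", s);
  return TRUE;
}
```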
If one assumes the C function is already there, i.e., a function from an existing library, the automatic wrappers are often not too bad. It is still dangerous though: the functions, their #defined constants (or enums), and the types may change, so that while the library is still source compatible it is no longer binary compatible, and without anyone realizing it the interface is now broken.
In many cases the raw functionality of C libraries is not what you want to expose to the Prolog user. As you design a more high-level and Prolog-friendly API, you may just as well write it in C. In part for the above reason, in part for performance, and because there is no double declaration that may get out of sync.
Additional generator steps complicate the build toolchain that is needed.
With C++ we can use strong typing and type polymorphism to simplify the interface.
Finally, I once wrote the ffi package that allows calling functions in dynamic libraries without a C compiler. To avoid getting out of sync, this parses the C header files to get access to the types and constants. Unfortunately it doesn’t work great, notably not for the headers that come with the C library. Many of these headers use all sorts of compiler-specific annotations. GCC already has quite a few of these. Recent Clang versions have even more. This makes the maintenance hard (I think the package is broken on MacOS at the moment). Modern libraries and headers also tend to wrap API calls in inline functions and/or macros. This is fine for a source-wrapper-based approach, but hard to deal with when directly accessing the dynamic library.
This is one of the best packages I have used, in terms of the ease of connection to the C world. So much so that I was wondering why it was not included in the core.
Even if the headers that come with the C library are not fully handled, it works well enough, and most of the use cases are really with third-party libraries that provide some functionality, and these don’t tend to use those pesky GCC extensions in their headers.
Thank you for providing that package, and I would really still consider adding it to the core.
Good to know it is appreciated. Yes, the original plan was to move it into the core. Indeed, third-party libraries are typically easier to deal with. They do often include system library headers, though. As long as the parser survives and the required types do not indirectly depend on hard-to-deal-with internal headers, everything still works. I don’t know how often that fails.
The original plan was to add it to the core. Trouble maintaining it, notably for MacOS, stopped me from doing so. Should we reconsider?
I think it would be a good idea to have it included in the core, and it can be mentioned in the docs that there are problems maintaining it on MacOS.
It is a tremendous advantage to have this package in the core: you can very quickly talk to a third-party shared library, and I think it is quite a loss not to have it there.