Ann: Paper about Web Prolog

Indeed. The 2-3× slowdown is not too bad. The current image is 1.3 MB, which isn’t too bad either. I’m not sure WASM will see sufficiently mature multi-threading anytime soon, or even whether we will see threading at all; SWI-Prolog is fairly demanding in its requirements for reliable and scalable thread APIs. You can surely still use the client API, and quite likely it is possible to build a multi-engine Web Prolog system using I/O-based task switching.
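
For instance, a minimal sketch of single-threaded task switching between two engines (illustrative only – the helper predicates here are made up; engine_create/3 and engine_next/2 are SWI-Prolog’s engine API):

demo :-
    engine_create(X, member(X, [a, b, c]), E1),
    engine_create(Y, between(1, 3, Y), E2),
    interleave(E1, E2).

% Ask the "current" engine for its next answer, print it, then hand
% control to the other engine; when one engine is exhausted, drain
% the other.
interleave(Current, Other) :-
    (   engine_next(Current, Answer)
    ->  format("~p~n", [Answer]),
        interleave(Other, Current)
    ;   drain(Other)
    ).

drain(E) :-
    (   engine_next(E, Answer)
    ->  format("~p~n", [Answer]),
        drain(E)
    ;   true
    ).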

2 Likes

Indeed. Peter van Roy was at the Erlang’19 workshop when I gave my talk. He commented that the Web Prolog approach seems to be able to provide the Semantic Web with what it’s currently lacking - an architecture.

The moment after he made that comment, I showed the audience the following slide:

So, yes, I think Peter van Roy may be right. :slight_smile:

In this paper, Wielemaker et al. show that since Prolog is a relational language, just like the Semantic Web languages, the notorious object-relational impedance mismatch problem can be avoided when building applications. They also show that Prolog is in many ways a better query language than SPARQL. Here’s a quote from the paper:

We do not consider SPARQL adequate for creating rich semantic web applications. SPARQL often needs additional application logic that is located near the data to provide a task-specific API that drives the user interface. Locating this logic near the data is required to avoid protocol and latency overhead. RDF-based application logic is a perfect match for Prolog and the RDF data is much easier queried through the Prolog RDF libraries than through SPARQL.

For purposes related to the Semantic Web, Jan’s current work on tabling is also important. Perhaps Web Prolog can be “sold” as a semantic web logic programming language? In a paper with the title Semantic Web Logic Programming Tools, the authors argue that Well-Founded Semantics (WFS) provides the right basis for such languages, and this is something that tabling makes possible. Indeed, SWI-Prolog already supports WFS - see this entry in the manual!
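
As a small illustration (not taken from the paper), the classic “win” program shows what WFS via tabling buys you in SWI-Prolog: positions caught in a negative loop come out as undefined instead of looping forever.

:- table win/1.

% A position is won if there is a move to a position that is not won.
win(X) :- move(X, Y), tnot(win(Y)).

move(a, b).
move(b, a).
move(b, c).
move(c, d).

% Under WFS, win(c) is true, win(d) is false, and win(a)/win(b) are
% undefined, because they depend on each other through negation.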

I don’t think this necessarily means that tabling has to be a part of Web Prolog as such - it might be one of those things that must be delegated to a host language that can deal with it.

1 Like

Thanks for the slide, very interesting.

There are various semantic underpinnings for logic programming, with well-founded semantics being one of them (stable model semantics / answer sets is another, if I understood that correctly).

I feel it’s best to keep a simple and standard (ISO?) Prolog baseline, as a lightweight, easy-to-learn-and-use logic programming language, and to enable other (more heavyweight) semantics as further options – for those use cases that require the additional expressiveness and the related formal reasoning guarantees.

Perhaps also a comment on Prolog as a knowledge representation language:

While Prolog is essentially a rule-based knowledge representation language, it’s a very basic one – which is also the source of its strength and versatility.

With Prolog one can also go further …

Logtalk, which compiles to Prolog, provides higher-level knowledge representation abstractions such as objects and other constructs – and then there are OWL variants implemented in Prolog, as well as the ability to import and link to OWL triple stores.

Dan

2 Likes

That old paper described Logtalk 1.x, an experiment in computational reflection. The implemented mechanisms that you mention were dropped for Logtalk 2.x/3.x, which is no longer an experiment but a language that has little in common with 1.x (other than the name) and shares no code with it.

The co-author of that paper was my Master’s degree advisor. He helped review the paper and the thesis but otherwise had no significant participation in the creative process that resulted in the Logtalk 1.x prototype. There was no collaboration with him in any shape or form in the creation and development of Logtalk 2.x/3.x.

2 Likes

As someone who is unfamiliar with pengines and Erlang, but who has built a fairly complex web application using SWI-Prolog, I’m a little confused as to what problem this solves.

I’m communicating between the JavaScript in the browser and the Prolog server via JSON like so:

% Handle a POST request carrying a JSON payload: parse it into a dict,
% let the game logic compute a reply, and send that back as JSON.
move_handler(Request) :-
  member(method(post), Request), !,
  http_read_json_dict(Request, DictIn),
  game_manager(DictIn, DictOut),
  reply_json_dict(DictOut).
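
For completeness, a handler like this gets hooked up with the standard HTTP libraries roughly as follows (a sketch – the path and port here are just placeholders):

:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).
:- use_module(library(http/http_json)).

% Register the handler under a URL path and start the server.
:- http_handler(root(move), move_handler, []).

server(Port) :-
    http_server(http_dispatch, [port(Port)]).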

So far, I’ve got a chess player at http://www.newsgames.biz/chess and a draughts player at http://www.newsgames.biz/draughts working using this simple code. (They are not exactly tough players yet, but it’s work in progress).

A concern of mine is whether this SWI-Prolog server will hold up if the site gets busy and popular. But so far, so good.

I haven’t spent much time looking at the pengines documentation, since I’m not sure whether it does anything SWI-Prolog’s “plain vanilla” HTTP predicates don’t already do.

3 Likes

You should post these in Useful Prolog references

1 Like

Thanks Eric

What is probably more useful is the basic game-playing code, which I’ve put at https://swish.swi-prolog.org/p/Graphs1.swinb, and my notes on writing a web server with SWI-Prolog at https://github.com/roblaing/swipl-webapp-howto, which I’ve posted here before but will add to the references link.

The question is what one wants to achieve with object orientation.

In knowledge representation, such as RDFS / OWL, one wants to compactly capture domain knowledge with semantic correctness guarantees related to provided symbolic inferences. There is no notion of function (in the imperative code sense), method or dispatch.

In object-oriented programming you want to structure your data and function code for stability, scalability and reuse. It’s a software engineering approach.

And, as far as I understand, in Logtalk you aim to create a reusable declarative structure for logic programs – i.e. also a software engineering approach, for logic programming – with dispatch as an extension of the call conventions (i.e. the procedural reading of Prolog code), while also enabling a taxonomic, declarative reading of the code structures.

Dan

p.s. And I guess, historically, object-orientation derived from frame-based systems – based on a theory of mind by Minsky et al. intended to enable the development of AI systems for common-sense reasoning.

2 Likes

Using Prolog as a web application language in my view just boils down to translating JSON to Prolog terms and back. As far as I understand, SWI-Prolog’s dictionaries are essentially syntactic sugar for a standard Prolog list along the lines of [key1(value1), key2(value2), …], using a very JSON-like notation for setting with Dict = _{key1: value1, key2: value2, ...} and getting with Dict.key1, so this shouldn’t be hard to adapt for other Prologs.
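
A tiny illustration of that set/get syntax (just a sketch):

% "Setting" a dict literal and "getting" a value with get_dict/3,
% the goal form of Dict.key1.
demo(Value1) :-
    Dict = _{key1: "value1", key2: "value2"},
    get_dict(key1, Dict, Value1).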

I’ve been tripped up by using member(key1(Value1), List) to extract elements from a list I assumed was ground but turned out not to be. So I don’t see that as a specific flaw of dicts; you just need to take care that the input arguments are of the type you think they are (a common source of glitches in all programming languages).

In the SQL world, there has been a problem that once the standards committee finally defined a JSON type along with a “correct” way to address JSON elements, they simply ignored what Postgres and nearly all other implementations had already been doing for years, so I’m guessing it will just be another standard all users and implementors ignore.

In the Prolog world, it would be nice if the various implementations standardised JSON handling. But I for one have written my own home-grown json_terms(?Json, ?Terms) predicate, and it’s not very difficult to do.
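
Something along these lines, for instance (a simplified sketch rather than my actual predicate – it only covers the JSON-to-terms direction, and assumes the json([Key=Value, ...]) terms produced by json_read/2 from library(http/json)):

:- use_module(library(http/json)).

json_terms(json(Pairs), Terms) :-
    maplist(pair_term, Pairs, Terms).

% Nested objects recurse; other values become Key(Value) terms.
pair_term(Key=json(Pairs), Term) :-
    !,
    json_terms(json(Pairs), Nested),
    Term =.. [Key, Nested].
pair_term(Key=Value, Term) :-
    Term =.. [Key, Value].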

Well, there is the language, the architecture and the contents.

The language can be as simple as intelligent handling of JSON.

The architecture enables transparent concurrent querying across various endpoints, transparent parallelism and synchronization when obtaining results, and backtracking as needed.

And the contents, which determine what queries will be submitted, require structured knowledge and inference across the knowledge included in the various knowledge sources.

You do need all three for end-to-end use cases on the semantic web.

Dan

p.s. Btw, not all web implementations are happy with textual (JSON-serialized) transport of data. This is largely a historical artifact of HTML/HTTP.

A binary serialization approach to data transport is much more efficient.

1 Like

As far as I can see, your post is one of the few recent messages relevant to the topic at hand. It’s a fair question that deserves a longer answer, but for now I’ll just point to SWISH as an example of an application that cannot be implemented using the pattern you use in the snippet above. A web-based IDE for traditional Prolog such as SWISH is a fairly demanding type of application. It really needs to backtrack over the network, and although the conversation between the programmer and the pengine must always be initiated by the programmer using the IDE, the interaction may at any point turn into a mixed-initiative conversation driven by requests for input made by the running query. So it needs something like library(pengines) or Web Prolog to work. Other kinds of web applications, such as your (really impressive!) chess and draughts programs, have no need for mixed-initiative interaction and may be happy treating a server/node as a mere “logic server”.
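
To give a flavour of the difference: with library(pengines), a client can backtrack into a query running on a remote node, e.g. using pengine_rpc/3 (the URL below is just a placeholder for some pengine-enabled server):

% Each solution is computed on the remote node and delivered to the
% local toplevel on backtracking.
?- pengine_rpc('https://pengines.swi-prolog.org',
               member(X, [a, b, c]),
               []).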

1 Like

Right, computation – but due to the requirements of reasoning tractability and guarantees, expressiveness is limited by design – it’s not a computer programming language. To do more, you add SPARQL queries, rules, code …

1 Like

Apologies, Torbjörn, for hijacking this thread into an OOP vs. frames/RDF/OWL debate, but I have to weigh in on this, and I will bring it back to Web Prolog.

Origins

OOP was implemented in Smalltalk-72, although an earlier language implemented inheritance in 1967 (Simula 67), whereas Minsky’s paper on frames was published in 1974. OOP “officially” predates frames.

However, the OOP Wikipedia page claims OOP was emerging in the late ’50s and ’60s in MIT’s AI group, where objects were being discussed as LISP atoms with attributes. It omits that Minsky joined MIT in ’58: these objects were the predecessors of frames. Alan Kay cited these ideas of objects as an influence prior to working on the design of Smalltalk, whose first version was a message-passing language that then implemented inheritance à la Simula.

So the origins are a little murky. What is clear is that Minsky considered frames a knowledge representation framework, whereas Kay, the Simula team, and the rest of the Smalltalk team were using these ideas to develop a programming language. They were developed with different intentions, but from a common terminology circulating in the AI labs of MIT.

OOP

As already mentioned, OOP developed as a software engineering method. Its idea of objects relates to components in a software system. Classes are abstract components; instances concretize them. In this way you can get the strange ideas found in design patterns like factory classes and singletons. Classes, by the single responsibility principle, are each supposed to be a component responsible for a single part of the system’s functionality.

In OOP, objects have properties and methods, and inheritance isn’t strict. So in OOP you can declare that a parent class has some property but that its child class does not. In an ontology, if something is true of the parent it must be true of the child too. Methods can be called on instantiation, on getting and setting properties, and at any other time you wish. As an object is a component in a software system, its methods are what provide its functionality.

Frames

Frames were developed as a knowledge representation technique. Their idea of objects is as classes and instances. A class is an abstraction of something in the real (or imagined) world, something in the domain of discourse. An instance is a concretization of such a class. A frame is supposed to represent what is known about one class or one instance of a thing.

A frame, if treated as a black box, appears to have only an identity, properties, and values of those properties. Yes, it can have procedures in value slots, but these are accessed when getting or setting a value in cases where the value is dynamic. The procedures don’t provide functionality; they only provide values. Frames also make use of subsumption to inherit values from their class or parent class.

Example Use

An example of working with OOP and frames: I want to create a report and print it out. I can query the frames to get the data for the report, then in OOP I use this data to instantiate the “Report” class and call the “print” method. A frame is a knowledge representation method, while printing is the execution of some algorithm that doesn’t return a value, so “print” has no place in a frame: “print” doesn’t relate “Report” to any knowledge. The “Report” class is a component in the software system whose functionality is to generate reports; it has no business ontologically defining a report or the data used in it, as that is not its purpose. In this example we adhere to the principle of separating data and functionality by using frames and OOP appropriately.

RDF & OWL

To place modern ontology languages in context, RDF isn’t one. RDF only provides triples with URIs and literals for the purpose of marking up a web document. RDFS was a first run at making RDF into something that could be used as an ontology language, but it couldn’t guarantee termination. That’s where OWL comes in, which draws upon both description logics (and hence can guarantee termination) and frames. OWL uses the RDF markup and can be used with RDFS terms. OWL is also strict with subsumption.

Furthermore, there is now SWRL, which is a declarative rule language for making and storing deductions from an OWL ontology. But, like frames, it can only be used to derive triples, unlike an OOP method, which is a capability of a component in the software system and thus can derive a value or do something else.

There is also a subtle but important difference between OWL and both frames and OOP. In an OWL triple store things are stored strictly as triples, so “bob age 53” is an edge called “age” between nodes called “bob” and 53. In a frame, in OOP, and in modern graph databases like Neo4j, the properties associated with a subject are stored with that subject, so in this example 53 would not be its own node.
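
In plain Prolog terms, the two shapes look roughly like this (illustrative only):

% Triple style, as in an RDF store: every value is its own node.
triple(bob, age, 53).
triple(bob, name, "Bob").

% Frame/record style: the properties are stored with the subject.
frame(bob, [age(53), name("Bob")]).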

Different Design Intentions → Different Implementations → Different Products

Just because in OOP we have things with an identity (although this differs between language implementations) and pairs of relations to things that can be values or methods does not mean that they are equivalent to frames/triples/RDF/OWL with their ideas of identity, relations, and values. Due to their shared origins they have a common terminology and some common architecture, but they have forked: for example, message passing is a huge part of OOP that has no corresponding idea in frames.

They are designed for different purposes, hence they implement these ideas in different and not always strictly correct ways. Thus they are both used in different ways, their unique implementations being designed to facilitate their use-case. OOP doesn’t follow ontological rigor, but it does allow the composition of software systems. Frames/OWL should/do follow ontological rigor, but their use of procedures is limited and should be invisible to the user querying them.

To bring this back to the need for Prolog on the web.

OWL is only a knowledge representation language, whereas Prolog is a programming language. OWL relations can have at most arity 2, but some relations hold between more than two things, such as the quality of an entity at a particular time. OWL can’t have complex terms in the object position, so it can’t operate on itself to say things like knows(karen, age(bob, 53)). Finally, OWL is open-world.
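
Both are direct in Prolog (made-up examples):

% An arity-3 relation: the quality of an entity at a particular time.
quality_at(bob, happy, "2019-09-21").

% A complex term in the "object" position.
knows(karen, age(bob, 53)).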

All of these were design choices to cope with the limitations of the web (what if a URL is down? Hence the open world) and to make it possible for non-specialists to add data without accidentally introducing some infinite recursion.

But we are Prolog programmers; we can wield the full capabilities of a logic programming language. We shouldn’t be constrained to (at best SROIQ) description logics for our knowledge representation. We shouldn’t be constrained to the open-world assumption as the only available response to an unresponsive query. Furthermore, we shouldn’t be constrained to knowledge representation alone. We can, and do, as our predecessors have done for almost 50 years, seamlessly integrate our represented knowledge with programs that put it to use. We should be able to do the same on the web.

6 Likes

thank you for the synopsis …

I think a key driver behind OWL and its profiles with carefully selected expressiveness was to provide reasoning guarantees – the knowledge that a query will return correct results (correctness) in a reasonable amount of time (tractability / decidability), and that all results that can be obtained will be obtained (completeness).

I think that (ironically, perhaps) in a closed database world, such as an enterprise setting with stable systems, such drivers could make sense (ironically, because OWL adopts the open-world assumption – a nod to the incompleteness of the web) …

However, for the web these seem to be very strong requirements, given that the web is open and dynamic, with plenty of incompleteness and contradictions.

I therefore see the case for relaxing the constraints, increasing expressiveness, and providing good-enough query responses in a reasonable amount of time. That is, Google-like results are good enough most of the time at web scale – and that can be achieved with logic programming technologies.

thanks for posting this … very interesting read …

It also highlights the need to have adequate tool support to curate such a large ontology …

I think it was Brachman’s overall programme that eventually moved KL-ONE into description logic, divorcing representation from message passing to focus on ontological descriptions – thereby narrowing the scope.

Dan

The history of knowledge representation is indeed interesting, and it seems that quite a few people here know a lot about this subject. It also looks like most (all?) of you agree that we should stick to logic-based knowledge representation and reasoning in the tradition of logic programming and Prolog as is, rather than build something much more elaborate on top of Prolog. At least this is what Web Prolog is aiming for.

Indeed, Web Prolog can be seen as an attempt to webize Prolog in order to create a web logic programming language, i.e. as an attempt to:

  1. introduce URIs into the language at points where it makes sense,

  2. exploit the existing web infrastructure (e.g. protocols such as HTTP and WebSocket, and formats such as JSON),

  3. make use of existing means for security (e.g. HTTPS and CORS),

  4. make sure the whole thing is scalable (e.g. by adopting an Erlang-ish computational model capable of scaling out not only to multiple cores on one machine but also to multiple nodes running on multiple machines, aim for RESTfulness, effective methods for caching, etc.),

  5. create a standard (e.g. based on a suitable subset of ISO Prolog, but developed under the auspices of the W3C this time rather than ISO, or under a liaison between these organisations), and

  6. make sure it fits the web culture (e.g. openness and extensibility and support for communities of interest and practice - perhaps by first and foremost trying to establish Web Prolog as a language supporting the Prolog and Logic Programming communities).

Among Prolog systems, and as we all know and appreciate, SWI-Prolog in particular has all the right tools for building server-side support for web applications, in the form of mature libraries for protocols such as HTTP and WebSocket, and excellent support for formats such as HTML, XML and JSON. Therefore, it might be argued that since Prolog can be (and has been!) used for building server-side support for web applications, it should already be counted as a web programming language. But since this is true also for Python, or Ruby, or Java, or just about any other general-purpose programming language in existence, it would make the notion of a web programming language more or less empty. We could of course argue that SWI-Prolog is much better at providing what is needed, but I believe we should go further than that, and that Prolog should claim an even more prominent position in this space. Prolog should be used not only for building server-side support for web applications, but should be made a part of the Web, in much the same way as HTML, CSS, JavaScript, RDF and OWL are parts of the Web. In other words, I think it should be possible to make Prolog appear as a web technology in its own right.

3 Likes

The ISO Prolog Core standard is arguably small in scope. I’m curious about what a “suitable subset” means. Maybe that can be discussed at the level of families of predicates instead of individual ones (ignoring for the moment other aspects of the standard)? There are also de facto standard Prolog features.

Agree with finding an umbrella standards organization other than ISO.

1 Like

Btw, there is the RuleML standardization effort. Included in it is XSB Prolog as an example implementation.

http://wiki.ruleml.org/index.php/RuleML_Home

From glancing through the documents, it seems to me that RuleML is pretty heavyweight and is not geared towards an architecture like the one described above.

But, it would be important to delineate distinctions and key contributions.

Or, to see how this could be done in collaboration …

Dan

1 Like