Ann: Paper about Web Prolog (Discussion)

Using Prolog as a web application language, in my view, largely boils down to translating JSON to Prolog terms and back. As far as I understand, SWI-Prolog’s dictionaries are essentially a key–value structure along the lines of [key1(value1), key2(value2), …] (though internally a dedicated term type rather than a plain list) with a very JSON-like notation: setting with Dict = _{key1: value1, key2: value2, ...} and getting with Dict.key1, so it shouldn’t be hard to emulate in other Prologs.
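
For reference, the basic dict operations look like this in SWI-Prolog, including dict_pairs/3, which converts between dicts and ordinary key–value pair lists (toy data):

?- Dict = _{name: "Bob", age: 53},
   get_dict(age, Dict, Age),        % goal-style alternative to Age = Dict.age
   dict_pairs(Dict, _Tag, Pairs).   % convert to a plain pair list
Age = 53,
Pairs = [age-53, name-"Bob"].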

I’ve been tripped up by using member(key1(Value1), List) to extract elements from a list that I assumed was ground but turned out not to be. So I don’t see that as a specific flaw in dicts; you just need to take care that the input arguments are of the type you think they are (a common source of glitches in all programming languages).
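
A minimal illustration of that pitfall (made-up data):

?- member(key1(V), [key1(a), key2(b)]).    % ground list: behaves as a lookup
V = a.

?- List = [key2(b), _Free],                % non-ground list ...
   member(key1(V), List).
List = [key2(b), key1(V)].                 % ... member/2 *constructed* a match

?- List = [key2(b), _Free],
   ground(List),                           % guard: fail (or throw) instead
   member(key1(V), List).
false.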

In the SQL world, there has been a problem: once the standards committee finally defined a JSON type along with a “correct” way to address JSON elements, it simply ignored what Postgres and nearly all other implementations had already been doing for years, so I’m guessing it will just be another standard that all users and implementors ignore.

In the Prolog world, it would be nice if the various implementations standardised JSON handling. But I for one have written my own home-grown json_terms(?Json, ?Terms) predicate, and it’s not very difficult to do.
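
For the record, here is a stripped-down sketch of such a predicate for SWI-Prolog, assuming the json([Key=Value, ...]) terms produced by library(http/json) and flat (non-nested) objects only:

:- use_module(library(apply)).        % maplist/3

%! json_terms(?Json, ?Terms)
%  Json is a json([Key=Value, ...]) term as read by json_read/2;
%  Terms is the corresponding [key1(Value1), key2(Value2), ...] list.
%  Works in both directions for flat objects.
json_terms(json(Pairs), Terms) :-
    maplist(pair_term, Pairs, Terms).

pair_term(Key=Value, Term) :-
    Term =.. [Key, Value].

% ?- json_terms(json([name='Bob', age=53]), Terms).
% Terms = [name('Bob'), age(53)].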

Well, there is the language, the architecture and the contents.

The language side can be as simple as intelligent handling of JSON.

The architecture enables transparent concurrent querying across various endpoints, transparent parallelism with synchronization when obtaining results, and backtracking as needed.

And the contents, which determine what queries will be submitted, require structured knowledge and inference across the knowledge held in the various knowledge sources.

You do need all three for end-to-end use cases on the semantic web.

Dan

p.s. btw, not all web implementations are happy with textual (JSON-serialized) transport of data. That is largely a historical artifact of HTML/HTTP.

Serializing to a binary format for data transport is much more efficient.

1 Like

As far as I can see, your post is one of the few recent messages relevant to the topic at hand. It’s a fair question that deserves a longer answer, but for now I’ll just point to SWISH as an example of an application that cannot be implemented using the pattern you use in the snippet above. A web-based IDE for traditional Prolog such as SWISH is a fairly demanding type of application. It really needs to backtrack over the network, and although the conversation between the programmer and the pengine must always be initiated by the programmer using the IDE, the interaction may at any point turn into a mixed-initiative conversation driven by requests for input made by the running query. So it needs something like library(pengines) or Web Prolog to work. Other kinds of web applications, such as your (really impressive!) chess and draughts programs, have no need for mixed-initiative interaction and may be happy treating a server/node as a mere “logic server”.
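
For concreteness, the client side of that pattern looks roughly like this (a sketch modelled on the example in the library(pengines) documentation; event formats have varied between versions, and the server URL is a placeholder):

:- use_module(library(pengines)).

demo :-
    pengine_create([ server('http://pengines.example.org'),  % placeholder node
                     src_text("p(a). p(b). p(c).")
                   ]),
    pengine_event_loop(handle, []).

handle(create(Id, _Features)) :-     % the pengine is up: ask a query
    pengine_ask(Id, p(X), []).
handle(success(Id, [X], true)) :-    % a solution, with more available:
    writeln(X),
    pengine_next(Id, []).            % backtrack over the network
handle(success(_Id, [X], false)) :-  % the final solution
    writeln(X).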

1 Like

Right, computation – but due to requirements on reasoning tractability and guarantees, expressiveness is limited by design – it’s not a general-purpose programming language. To do more, you add SPARQL queries, rules, code …

1 Like

Apologies Torbjörn for hijacking this thread into an OOP vs frames/RDF/OWL debate, but I have to weigh in on this and I will bring it back to Web Prolog.

Origins

OOP was implemented in Smalltalk-72, although an earlier language, Simula 67, had implemented inheritance in 1967, whereas Minsky’s paper on frames was published in 1974. So OOP “officially” predates frames.

However, the OOP Wikipedia page claims that OOP was already emerging in the late ’50s and ’60s in MIT’s AI group, where objects were being discussed as LISP atoms with attributes. It omits that Minsky joined MIT in ’58: these objects were the predecessors of frames. Alan Kay cited these ideas of objects as an influence prior to working on the design of Smalltalk, whose first version was a message-passing language that later implemented inheritance à la Simula.

So the origins are a little murky. What is clear is that Minsky considered frames a knowledge representation framework, whereas Kay, the Simula team, and the rest of the Smalltalk team were using these ideas to develop programming languages. They were developed with different intentions, but from a common terminology muttered in the AI labs of MIT.

OOP

As already mentioned, OOP developed as a software engineering method. Its idea of objects relates to components in a software system. Classes are abstract components; instances concretize them. This is how you get the strange ideas found in design patterns, like factory classes and singletons. By the single responsibility principle, a class is supposed to be a component responsible for a single part of the system’s functionality.

In OOP, objects have properties and methods, and inheritance isn’t strict: you can declare that a parent class has some property and declare that its child class does not. In an ontology, if something is true of the parent it must be true of the child too. Methods can be called on instantiation, on getting and setting properties, and at any other time you wish. As an object is a component in a software system, its methods are what provide its functionality.

Frames

Frames were developed as a knowledge representation technique. The idea of objects here is classes and instances. A class is an abstraction of something in the real (or imagined) world, something in the domain of discourse. An instance is a concretization of such a class. A frame is supposed to represent what is known about one class or one instance of a thing.

A frame, if treated as a black box, appears to have only an identity, properties, and values of those properties. Yes, it can have procedures in value slots, but these are invoked when getting or setting a value in cases where the value is dynamic. The procedures don’t provide functionality; they only provide values. Frames also make use of subsumption to inherit values from their class or parent class.

Example Use

An example of working with OOP and frames together: I want to create a report and print it out. I can query the frames to get the data for the report, then, on the OOP side, use this data to instantiate a “Report” class and call its “print” method. A frame is a knowledge representation method; printing is the execution of an algorithm that doesn’t return a value, so “print” has no place in a frame: “print” doesn’t relate “Report” to any knowledge. The “Report” class is a component in the software system whose functionality is to generate reports; it has no business ontologically defining a report or the data used in it, as that is not its purpose. In this example we adhere to the principle of separating data from functionality by using frames and OOP each for what they do best.

RDF & OWL

To place modern ontology languages in context: RDF isn’t one. RDF only provides triples with URIs and literals for the purpose of marking up a web document. RDFS was a first attempt at turning RDF into something that could be used as an ontology language, but it couldn’t guarantee termination. That’s where OWL comes in, which draws upon both description logics (hence it can guarantee termination) and frames. OWL uses the RDF markup and can be used with RDFS terms. OWL is also strict about subsumption.

Furthermore, there is now SWRL, a declarative rule language for making and storing deductions from an OWL ontology. But, like frames, it can only be used to derive triples, unlike an OOP method, which is a capability of a component in the software system and can therefore derive a value or do something else entirely.

There is also a subtle but important difference between OWL and both frames and OOP. In an OWL triple store, things are stored strictly as triples, so “bob age 53” is an edge called “age” between nodes called “bob” and 53. In a frame, in OOP, and in modern graph databases like Neo4j, the properties associated with a subject are stored with that subject, so in this example 53 would not be its own node.
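
The difference is easy to see in Prolog notation (a toy illustration):

% Triple-style storage (RDF/OWL): every value is a node of its own,
% each property a separate edge.
triple(bob, age,  53).
triple(bob, name, "Bob").

% Frame/record-style storage: the properties live with the subject.
person(bob, _{age: 53, name: "Bob"}).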

Different Design Intentions → Different Implementations → Different Products

Just because in OOP we have things with an identity (although this differs between language implementations) and pairs of relations to things that can be values or methods does not mean that they are equivalent to frames/triples/RDF/OWL with their ideas of identity, relations, and values. Due to their shared origins they have a common terminology and some common architecture, but they have since forked; for example, message passing is a huge part of OOP that has no corresponding idea in frames.

They are designed for different purposes, hence they implement these ideas in different and not always strictly correct ways, and they are used in different ways, each implementation being designed to facilitate its use case. OOP doesn’t follow ontological rigor, but it does allow the composition of software systems. Frames and OWL do (and should) follow ontological rigor, but their use of procedures is limited and should be invisible to the user querying them.

To bring this back to the need for Prolog on the web.

OWL is only a knowledge representation language; Prolog is a programming language. OWL relations can have at most arity 2, but some relations hold between more than two things, such as the quality of an entity at a particular time. OWL can’t have complex terms in the object position, so it can’t operate on itself to say things like knows(karen, age(bob, 53)). Finally, OWL is open-world.
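
In Prolog terms, both are unproblematic (illustrative examples):

% An n-ary relation: the quality of an entity at a particular time.
% OWL would have to reify this into several triples via a fresh individual.
weight(bob, 80, kg, date(2019, 5, 1)).

% A complex term in the object position: a statement about a statement.
knows(karen, age(bob, 53)).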

All of these were design choices to cope with the limitations of the web (what if a URL is down? hence the open world) and to make it possible for non-specialists to add data without accidentally introducing an infinite recursion.

But we are Prolog programmers; we can wield the full capabilities of a logic programming language. We shouldn’t be constrained to description logics (SROIQ at best) for our knowledge representation. We shouldn’t be constrained to the open-world assumption as the only available solution to an unresponsive query. Furthermore, we shouldn’t be constrained to knowledge representation. We can, and do, as our predecessors have done for almost 50 years, seamlessly integrate our represented knowledge with programs that put it to use. We should be able to do the same on the web.

6 Likes

thank you for the synopsis …

I think the key driver behind OWL and its profiles, with their carefully selected expressiveness, was to provide reasoning guarantees – the knowledge that a query will return correct results (correctness), in a reasonable amount of time (tractability / decidability), and that all results that can be obtained will be obtained (completeness).

I think that in a closed database world, such as an enterprise setting with stable systems, such drivers could make sense (ironically, perhaps, because OWL adopts the open world assumption – a nod to the incompleteness of the web) …

However, for the web these seem to be very strong requirements, given that the web is open and dynamic, with plenty of incompleteness and contradictions.

I therefore see a case for relaxing the constraints, increasing expressiveness, and providing good-enough query responses in a reasonable amount of time. That is, Google-like results are good enough most of the time at web scale – and that can be achieved by logic programming technologies.

thanks for posting this … very interesting read …

It also highlights the need for adequate tool support to curate such a large ontology …

I think it was Brachman’s overall programme that eventually moved KL-ONE into description logic and divorced representation from message passing, to focus on ontological descriptions via description logic – thereby narrowing the scope.

Dan

The history of knowledge representation is indeed interesting, and it seems that quite a few people here know a lot about this subject. It also looks like most (all?) of you agree that we should stick to logic-based knowledge representation and reasoning in the tradition of logic programming and Prolog as is, rather than build something much more elaborate on top of Prolog. At least this is what Web Prolog is aiming for.

Indeed, Web Prolog can be seen as an attempt to webize Prolog in order to create a web logic programming language, i.e. as an attempt to:

  1. introduce URIs into the language at points where it makes sense,

  2. exploit the existing web infrastructure (e.g. protocols such as HTTP and WebSocket, and formats such as JSON),

  3. make use of existing means for security (e.g. HTTPS and CORS),

  4. make sure the whole thing is scalable (e.g. by adopting an Erlang-ish computational model capable of scaling out not only to multiple cores on one machine but also to multiple nodes running on multiple machines, aiming for RESTfulness, effective methods for caching, etc.),

  5. create a standard (e.g. based on a suitable subset of ISO Prolog, but developed under the auspices of the W3C this time rather than ISO, or under a liaison between these organisations), and

  6. make sure it fits the web culture (e.g. openness, extensibility, and support for communities of interest and practice – perhaps by first and foremost trying to establish Web Prolog as a language supporting the Prolog and Logic Programming communities).
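
For a flavour of points 1 and 2, the draft manuscript webizes querying with an rpc/2 predicate that addresses a node by its URI; a remote call might look something like this (a sketch following the draft; names and details may change):

% Backtrack, from the local toplevel, over solutions computed
% by a remote Web Prolog node identified by a URI:
?- rpc('http://remote.example.org', member(X, [a, b, c])).
X = a ;
X = b ;
X = c.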

Among Prolog systems, and as we all know and appreciate, SWI-Prolog in particular has all the right tools for building server-side support for web applications, in the form of mature libraries for protocols such as HTTP and WebSocket, and excellent support for formats such as HTML, XML and JSON. Therefore, it might be argued that since Prolog can be (and has been!) used for building server-side support for web applications, it should already be counted as a web programming language. But since this is true also for Python, or Ruby, or Java, or just about any other general-purpose programming language in existence, it would make the notion of a web programming language more or less empty. We could of course argue that SWI-Prolog is much better at providing what is needed, but I believe we should go further than that, and that Prolog should claim an even more prominent position in this space. Prolog should be used not only for building server-side support for web applications, but should be made a part of the Web, in much the same way as HTML, CSS, JavaScript, RDF and OWL are parts of the Web. In other words, I think it should be possible to make Prolog appear as a web technology in its own right.

3 Likes

The ISO Prolog Core standard is arguably small in scope. Curious what a “suitable subset” means. Maybe that can be discussed at the level of families of predicates instead of individual ones (ignoring for the moment other aspects of the standard)? There are also de facto standard Prolog features.

Agree with finding an umbrella standards organization other than ISO.

1 Like

Btw, there is the RuleML standardization effort. Included in it is XSB Prolog as an example implementation.

http://wiki.ruleml.org/index.php/RuleML_Home

From glancing through the documents, it seems to me that RuleML is pretty heavyweight and not geared towards an architecture like the one described above.

But, it would be important to delineate distinctions and key contributions.

Or, to see how this could be done in collaboration …

Dan

1 Like

I intentionally kept it a bit vague. For me, at this point, “suitable subset” can mean anything from the empty set up to something much bigger. The exact scope should be determined by the community. But of course, the bigger the subset of ISO Prolog we start from, the less work it takes to create a standard. My current thinking is that a Web Prolog standard doesn’t have to include predicates for file I/O, OS interaction, networking or storage, as it can rely on the host environment in which it is embedded for such functionality.

If we have actors, then predicates defined in the Threads draft standard are not needed, and it would be great if modules can be simplified, or perhaps replaced with something entirely different.

I’ve been playing around with ways to split a standard up into profiles. Here’s a diagram that shows how it might work:

To understand what the diagram means, you may have to look at chapter 10 of my draft manuscript at https://github.com/Web-Prolog/swi-web-prolog/raw/master/book/web-prolog.pdf .

And yes, dicts and strings might be controversial. We can do without them, but they may be nice to have.

As I write in my Erlang’19 paper, a way to start a standardisation process might be to create a W3C Community Group (see https://www.w3.org/community) as this appears to be an easy way to find out if enough interest can be generated among people of appropriate expertise.

I know that you, as a former member/editor in the ISO Prolog committee, might have a lot to contribute to this discussion, so I’d really appreciate your comments!

I would expect that it will be some time until dedicated native Web Prolog implementations surface. More likely, the first implementations will be Prolog systems providing, as a library (or libraries), whatever is required to comply with the Web Prolog requirements. As such, and to incentivize early adoption and experimentation, I would focus on what’s required to run Web Prolog applications rather than on what’s not required but most likely already exists as part of most Prolog implementations. Standardization may become important in the future, but it would be easy to get tangled up in the present in discussions about e.g. which subset of Core Prolog to adopt. My suggestion would be to start with an RFC-like process for the fundamental parts (e.g. URI representation and handling) and work towards some consensus.

2 Likes

Can you say some further words on the scalability achieved, and on security and privacy, both on the web and in enterprise settings?

Ideally, I think, this should entice people to develop enterprise apps as well – just like, say, J2EE – a requirement I was asked about some time ago …

Also, Web Prolog on the server and on the client will have different needs; perhaps you want to show this in the diagram as well …

Daniel

1 Like

Yes, I agree. There probably has to be a Web Prolog profile dedicated to the browser.

Yes, these are good suggestions. Building at least one working implementation of a Web Prolog node is important (and for a W3C standard at least two are required).

However, as I see it, creating a W3C community group, and doing so fairly early, might be a good idea too, as a way to elicit input and find people outside the Prolog community (e.g. AI-related groups, or Semantic Web people) who might be keen on the idea. Creating such a group is really easy - it shouldn’t take more than an hour or two.

Creating a report that can be passed on to the W3C would take longer, but if we (in 2022, say) have such a comprehensive report ready, plus two proof-of-concept implementations that are capable of talking to each other, then I think the W3C might allow us to take the next step - to create a proper working group.

2 Likes

You built library(pengines) and said it was an informative experience: that you’d do some things differently if you did it again. It seems as though it’s time to take another run at it to get those proof-of-concept implementations running.

It’d be nice if it used SWI-Prolog again; that might make iterations quicker, as Paulo said above. The more frequently something working is put in front of developers, the more ideas can be tested and refined. Time for a bit of Rapid Application Development? Judging by the length of this thread and the fervent debate in it, there’s support in the community. Let’s hack at it!

Regarding that debate: dicts and strings may be controversial in Prolog, but if you want to bring Python/Ruby/JavaScript etc. programmers on board, these are concepts they understand and that will enable them to get up and running more quickly. In fact, it’d be wise to expand upon the built-in string predicates to provide case conversion, splitting, search, replace, etc. Also, given that programmers will likely wish to integrate their web applications with existing APIs that return JSON, the SWI-Prolog dict abstraction provides these non-expert Prolog programmers with a familiar interface that resembles the JSON data structure. Furthermore, they can choose pairs if they wish.
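
SWI-Prolog already ships a fair number of such string predicates; for example:

?- string_upper("web prolog", U).            % case conversion
U = "WEB PROLOG".

?- split_string("a,b,,c", ",", "", Parts).   % splitting on separators
Parts = ["a", "b", "", "c"].

?- sub_string("pengines", 3, 5, _, S).       % substring extraction
S = "gines".

?- string_concat("web ", "prolog", S).       % concatenation
S = "web prolog".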

Regarding accessing browser features like localStorage: in Tau Prolog it could look like this with library(js) (note that the Web Storage API methods are getItem and setItem):

localStorage(Key, Value) :-                 % get: Value is unbound
    var(Value), atom(Key),
    prop('localStorage', LocalStorage),     % fetch the global localStorage object
    prop(LocalStorage, 'getItem', Get),     % the Storage API reader is getItem
    apply(Get, [Key], Value).
localStorage(Key, Value) :-                 % set: Value is given
    atom(Value), atom(Key),
    prop('localStorage', LocalStorage),
    prop(LocalStorage, 'setItem', Set),     % ... and the writer is setItem
    apply(Set, [Key, Value], _).
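
Usage would then be symmetric in both directions (assuming the sketch above behaves as intended):

?- localStorage(theme, dark).     % store a value
true.

?- localStorage(theme, Theme).    % retrieve it later
Theme = dark.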

How browser features are accessed will depend on whether a browser profile is built with JavaScript or WASM. Either way, there is no need to worry about this yet: the features will be accessible, and packs/libraries can always be authored.

1 Like

I may be completely wrong here, but I have a feeling that Semantic Web people, who are deeply invested in description logic approaches – it’s a huge area of research, industry standards and practice – will have a harder time with a Prolog implementation that doesn’t have a strong formal semantics (and is, moreover, an undecidable language).

There is the stream of Datalog research with formal semantics – but Prolog is not Datalog.

Dan

1 Like

I came to Prolog from Semantic Web out of frustration at the limitations. I’m not alone in that frustration.

3 Likes