You can load s(CASP) into the WASM version. It is one of the demo files:
:- use_module('https://raw.githubusercontent.com/SWI-Prolog/sCASP/master/prolog/scasp.pl').
p :- not q.
q :- not p.
It takes a bit long to load.
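Those two clauses form the classic "even loop" through negation: under the stable model (answer set) semantics the program has exactly two models, {p} and {q}. As a purely illustrative sketch (not how any real solver works), the Gelfond-Lifschitz construction can be brute-forced in a few lines of Python:

```python
from itertools import chain, combinations

# Program: p :- not q.  q :- not p.
# Each rule is (head, positive_body, negative_body).
rules = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Least fixpoint of a definite (negation-free) program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if all(a in model for a in pos) and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body
    # intersects the candidate; strip negation from the rest.
    reduct = [(h, pos, []) for h, pos, neg in rules
              if not any(a in candidate for a in neg)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
stable = [set(s) for s in subsets if is_stable(set(s))]
print(stable)  # [{'p'}, {'q'}] -- the two answer sets
```

Enumerating all subsets is obviously exponential; real ASP solvers ground and search far more cleverly, but the reduct check above is the definition they implement.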
The SWISH version has nicer and interactive output though.
And yes, SWISH is under maintenance. It is frequently restarted and it seems the automatic restart on issues does not work. More details will probably follow shortly.
Thank you for the explanation of “valuation lattice”, I see what you mean: a quantification of uncertainty that imposes an ordering on events, objects etc.
The way I think about this is that I don’t want to explicitly assign un/certainty values to “things”: programs, clauses or atoms. I like how s(CASP) does it: the “shades of truth” are a semantics, they are not explicitly represented in the code. That is, “¬a” is “certainly false” just because a human reading it knows that’s what classical negation means, in this context. Thus proofs can proceed unburdened by human interpretations of their results, which is efficient and sensible (since we’re asking a computer to complete the proof, and not a human). One can then build on that semantics with further code, and explicitly represent the “shades” in a downstream task. For example, I could programmatically look at the output of an s(CASP) program and separate the results by their “truth shades”.
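A toy sketch of that kind of downstream separation in Python; the textual conventions assumed here (“-a” for classical negation, “not a” for NAF) are illustrative only, not s(CASP)’s exact output format:

```python
# Hypothetical downstream pass: bucket the atoms of a model by
# "truth shade". The model and its syntax are made up for the sketch.
model = ["p", "-q", "not r", "not -s"]

def shade(atom):
    if atom.startswith("not -"):
        return "not certainly false"
    if atom.startswith("not "):
        return "not provably true"
    if atom.startswith("-"):
        return "certainly false"
    return "certainly true"

by_shade = {}
for a in model:
    by_shade.setdefault(shade(a), []).append(a)
print(by_shade)
```

The point is that the shades live entirely in this downstream code; the proof procedure itself never manipulated them.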
The same goes for NAF, btw. The fact that NAF operates under a Closed World Assumption (CWA) is never explicitly represented in a Prolog program. The programmer, or the user, must keep in mind the semantics of the formalism. I guess that explains why I often find an odd-looking remark in Prolog courses explaining that “true” and “false” are not to be taken as absolutes and must be considered in the context of the CWA.
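The contrast can be made concrete with a toy example (plain Python rather than Prolog, purely for illustration):

```python
# Toy illustration of the Closed World Assumption: what a
# Prolog-style query answers vs. an honest open-world reading.
facts = {("parent", "tom", "bob")}

def cwa_holds(q):
    # Negation as failure: anything not provable is reported false.
    return q in facts

def open_world(q):
    # Open world: absence of proof is not proof of absence.
    return True if q in facts else "unknown"

q = ("parent", "bob", "tom")
print(cwa_holds(q))   # False: NAF treats the unprovable as false
print(open_world(q))  # "unknown": nothing rules it in or out
```

The CWA is nowhere in the code; it is only in how we agreed to read the `False`.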
See my first comment in this thread about my thoughts on mixing probabilities and logic (and generally please see my comments on s(CASP) and why I am so enthusiastic about it, if you are confused by this comment). In summary there are too many frameworks that attempt to do this and, now that I think about it, I even worked one out myself when I was doing my MSc (I was training some PCFGs). As far as I know there’s no accepted standard and all frameworks are deficient in one way or another. In any case I think probabilistic reasoning itself is broken: it’s an elegant theory that only works in practice for simple cases and turns into a horrible mess everywhere else. And to rant a bit about this, it seems these days, every time I pick up a paper that does “Bayesian this and that” I find that somewhere in there somebody is trying to integrate over an infinite distribution and ends up approximating it with some data, possibly annotated by their dog for all I know. So much for the elegant mathematics of the Bayesian calculus.
Yeah, absolutely not. I’m doing none of that. That stuff doesn’t work and I don’t want it soiling my clean and tidy logical notations. Begone, probabilities! Oksapodo! Ftou! Ftou! (And other apotropaic expressions in Greek).
Think of it this way: Prolog is the syntax and semantics of logic turned into a programming language (I’m not saying “of FOL” because Prolog is not FOL and it’s not even one logic). It’s not perfect, but it’s useful and usable. But what would it look like if someone tried to turn the Bayesian calculus into a programming language? It would look like an unholy mess, that’s what it would look like. It would be like one of those esoteric languages like Brainfuck that look like line noise, or Piet (where one programs using coloured squares like a Mondrian painting). There’s a reason why this is so: the notation is too abstract and the semantics have nothing to do with the real world. The entire edifice is erected in Plato’s world of ideas, or in Aristophanes’ cloud cuckoo land, more like it.
And yet so much effort in AI has been wasted trying to do just that, use probabilities as a language to program computers to take complex decisions in the real world. That’s done under the pretext that the real world is full of uncertainties and probabilities model uncertainty. Great idea, but it doesn’t work because probabilities are a representation that humans can’t understand and that computers can’t compute: the most widely used probabilistic machine learning algorithm has got to be … Naive Bayes. A simplification, necessary to avoid the intractable calculations that one must always perform in the non-naive case. Everything in probabilistic reasoning is like that, there are intractable calculations all over the place and they crop up every time anyone tries to do anything halfway useful, let alone try to model human decision making.
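The simplification in question is the conditional-independence assumption: instead of the intractable joint likelihood P(features | class), Naive Bayes multiplies one term per feature. A minimal sketch with invented numbers (toy spam example, illustrative only):

```python
import math

# Class priors and per-word likelihoods P(word | class), all made up.
prior = {"spam": 0.4, "ham": 0.6}
likelihood = {
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def classify(words):
    score = {}
    for c in prior:
        s = math.log(prior[c])
        for w in words:
            # The "naive" step: words assumed independent given the class,
            # so the joint likelihood factorises into a product (sum of logs).
            s += math.log(likelihood[c][w])
        score[c] = s
    return max(score, key=score.get)

print(classify(["offer"]))              # spam
print(classify(["meeting", "meeting"])) # ham
```

Without that factorisation one would need the full joint distribution over all word combinations, which is exactly the kind of intractable object the rant above is about.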
And we’re still only in the theoretical sphere, where probabilities are not reified and we’re just doing pure mathematics with random variables like A, B and C. But the moment you try to assign actual, numeric values to your variables, the moment you take probabilities into the real world they transform into statistics, like Mogwai transform into Gremlins when you feed them after midnight. And then you’re back to doing dirty, unprincipled heuristics. So what was the point of the elegant and principled approach in the first place? It’s the old switcheroo, it’s a rip-off.
Statistical AI is a mess. Why would I ever want to bring that mess into my beautiful and practical predicate calculus? Never!
It’s not purism, not at all. Exactly the opposite. I’m a pragmatist who wants something that really works. Probabilities work on paper, but not anywhere else.
That’s incomprehensible. You had to draw a Venn diagram to explain it and I still don’t get it.
Not sure whether this topic needs a separate thread?
Your rant about probability theory is “probably” true (pun intended).
But I cannot respond, since nowhere did I advocate probability
theory alone. You didn’t read carefully what I wrote:
Rational choice theory amounts to attempts to better understand
human economic and social behaviour. It complements
probability theory, in that it offers a soft-science viewpoint
whereas mathematical probability theory is rather hard science.
Looking also at soft science has a long tradition in artificial intelligence;
a prominent name that comes to mind is Herbert Simon.
But certain tooling and logic programming approaches might not be
that good per se at incorporating some insights from rational choice
theory. It also takes some extra effort to refine what you are doing
and not rely only on simple probabilistic approaches.
Edit 12.01.2023
For example, there is an extension of ProbLog with utility functions
and expected values, but most likely you cannot choose between risk-averse
and risk-taking modes of rationality, since that would
also require calculating the variance. But it is nevertheless a nice
further extension of “vanilla” ProbLog:
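To make the variance point concrete, here is a toy mean-variance sketch in Python (not ProbLog; all numbers invented): two lotteries with the same expected utility that risk-averse and risk-seeking agents rank differently.

```python
# Two lotteries as lists of (probability, utility) pairs, made up
# so that their expected utilities coincide but their variances differ.
lotteries = {
    "safe":  [(1.0, 10.0)],
    "risky": [(0.5, 0.0), (0.5, 20.0)],
}

def mean(l):
    return sum(p * u for p, u in l)

def variance(l):
    m = mean(l)
    return sum(p * (u - m) ** 2 for p, u in l)

def score(l, lam):
    # Mean-variance criterion: lam > 0 penalises risk (risk-averse),
    # lam < 0 rewards it (risk-seeking), lam = 0 is expected value only.
    return mean(l) - lam * variance(l)

def prefer(lam):
    return max(lotteries, key=lambda k: score(lotteries[k], lam))

assert mean(lotteries["safe"]) == mean(lotteries["risky"]) == 10.0
print(prefer(0.01), prefer(-0.01))  # safe risky
```

An expected-value-only criterion cannot tell these two lotteries apart, which is exactly why risk attitudes need the variance (or some other dispersion measure) on top of it.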
Indeed, I missed the bit about “rational choice”, mainly because I don’t know what that is and so I didn’t recognise it as a special term.
On the other hand the rant wasn’t aimed specifically at you so I wasn’t expecting a response. Sorry for the misunderstanding. I tend to explode in incontinent rants at the most inappropriate times.
Wow, I did not expect to be partially responsible for kicking off such a discussion. To address a far earlier point, ASP is interesting to me compared to s(CASP) because my personal interests lie more with forward inference / datalog than backward inference. I’ve been in the business of trying to find useful ways to enhance the expressivity of datalog. One such enhancement is to add e-graph rewriting to datalog, and I still do not understand how that could work in backward inference. Maybe somehow something something tabling, but it is not at all clear. ASP is very compelling as one direction for a super-datalog.
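For anyone unfamiliar with the forward-inference style in question, it is naive bottom-up Datalog evaluation: apply every rule to the known facts until nothing new appears. Transitive closure is the usual toy example (plain Python, purely illustrative; real engines use semi-naive evaluation and indexes):

```python
# Facts as tuples; two rules for reachability:
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- path(X,Y), edge(Y,Z).
facts = {("edge", 1, 2), ("edge", 2, 3)}

def step(db):
    new = set(db)
    for (r, x, y) in db:
        if r == "edge":
            new.add(("path", x, y))
    for (r1, x, y) in db:
        for (r2, y2, z) in db:
            if r1 == "path" and r2 == "edge" and y == y2:
                new.add(("path", x, z))
    return new

# Iterate to a fixpoint: forward chaining terminates because the
# Herbrand base here is finite and step() is monotone.
db = set(facts)
while True:
    nxt = step(db)
    if nxt == db:
        break
    db = nxt
print(sorted(t for t in db if t[0] == "path"))
# [('path', 1, 2), ('path', 1, 3), ('path', 2, 3)]
```

The appeal of ASP as a "super-datalog" is that it keeps this bottom-up, whole-model flavour while adding negation and choice, rather than switching to goal-directed backward search.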
Since it seems I’ve got abductive and inductive logic programming experts on the horn, is there any advice on references or systems to use? In particular, I’m interested in applications to binary and imperative program analysis: invariant inference, spec inference and other such questions.