Probabilistic Logic Programming

What is a good resource for Probabilistic Logic Programming in Prolog, SWI-Prolog in particular?

I found ProbLog which looks elegant but seems to be implemented on top of Python instead of one of the Prolog flavors.

3 Likes

I’d suggest starting with a look at the terrific site http://cplint.ml.unife.it/

6 Likes

You can also check out the book http://ml.unife.it/plp-book/ which is associated with the site mentioned by Jan.

2 Likes

http://mcs.unife.it/~friguzzi/plptutorial/#tutorial is also very nice and easy.

1 Like

BTW, what slowdown should one expect when using cplint with a simple LPAD (compared with calling a regular non-probabilistic predicate)?

Probabilistic reasoning is in general very expensive, so a slowdown with respect to plain Prolog is to be expected. What is the program you are trying?

I was simply experimenting with different examples from your book, so I was wondering if you had made a benchmark to measure the slowdown.

I have made no such benchmarks because the comparison would not be very fair: plain Prolog and probabilistic Prolog compute different things. In fact, I don’t know of anyone who has done this comparison.
However, if you want to get an idea of the difference, you can try computing the existence of a path in a graph; see

http://cplint.eu/example/inference/path_tabling.swinb

for the probabilistic version.

If you consider the attached graph, cplint takes more than 1 hour to answer the query prob(path(1,100),P). If you remove the probabilities, it takes about 10 ms to answer path(1,100).
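For readers without the attachment, here is a minimal sketch of what such a probabilistic reachability program looks like in cplint’s pita syntax. The tiny graph and the edge probabilities are invented for illustration; they are not the benchmark graph from the attachment.

```prolog
% Hedged sketch, not the benchmark program: probabilistic
% reachability in cplint's pita syntax.
:- use_module(library(pita)).
:- pita.
:- begin_lpad.

% Each edge exists with the annotated probability
% (values made up for illustration).
edge(1,2):0.6.
edge(2,3):0.6.
edge(1,3):0.2.

path(X,X).
path(X,Y) :- edge(X,Z), path(Z,Y).

:- end_lpad.

% Query the probability that a path from 1 to 3 exists:
% ?- prob(path(1,3), P).
```

On cyclic graphs like the benchmark one, the tabled variant shown at the path_tabling link above is needed to avoid non-termination.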

(Attachment graph_01.pl is missing)

1 Like

Sorry, the attachment didn’t go through, here it is
https://drive.google.com/file/d/1x8a-wTG8jIIZRFM9F5QClZatYVySrOHe/view?usp=sharing

@friguzzi, I ran into an error trying to use cplint with SWI 8.1.4 on MacOS.
The install went without issues, but library(pita) fails to load.

?- pack_install(cplint).
Install cplint@4.5.0 from GIT at https://github.com/friguzzi/cplint.git Y/n? 
Warning: Package depends on the following:
Warning:   "auc", provided by auc@1.0 from https://github.com/friguzzi/auc.git
Warning:   "bddem", provided by bddem@4.3.1 from https://github.com/friguzzi/bddem.git
Warning:   "matrix", provided by matrix@1.9.1 from https://github.com/friguzzi/matrix.git

What do you wish to do
   (1) * Install proposed dependencies
   (2)   Only install requested package
   (3)   Cancel

Your choice?
i auc@1.0                   - Library for computing Areas Under the Receiver Operating Characteristic and Precision Recall curves
i bddem@4.3.1               - A library for manipulating Binary Decision Diagrams
i matrix@1.0                - Operations with matrices
Package:                cplint
Title:                  A suite of programs for reasoning with probabilistic logic programs
Installed version:      4.5.0
Author:                 Fabrizio Riguzzi <fabrizio.riguzzi@unife.it>
Download URL:           https://github.com/friguzzi/cplint/releases/*.zip
Requires:               bddem, auc, matrix
Activate pack "cplint" Y/n? 
true.

?- use_module(library(pita)).
ERROR: /lib/swipl/pack/bddem/prolog/bddem.pl:33:
	/lib/swipl/pack/bddem/prolog/bddem.pl:33: Initialization goal raised exception:
	dlopen(/lib/swipl/pack/bddem/lib/x86_64-darwin/bddem.so, 1): Library not loaded: @rpath/libswipl.7.dylib
  Referenced from: /lib/swipl/pack/bddem/lib/x86_64-darwin/bddem.so
  Reason: image not found

Any advice?
Thank you!

Try with
pack_rebuild(bddem).

Yeah, rebuild helped. Thanks!

Dear Ielbert,

In addition to the resources/systems already pointed out to you, there are a number of
packages with a PLP flavour, depending on whether you are interested in reasoning,
parameter estimation, or general machine learning.

pha - Probabilistic Horn Abduction
phil - Learning the parameters of Hierarchical Probabilistic Logic Programs with gradient descent and Expectation Maximization
prism - Run PRISM as a child process
[Unfortunately this is just a harness for running the PRISM system. It’s a pity it was named prism, as it would be great to have PRISM itself in SWI-Prolog.]
bims - Bayesian inference of model structure
pepl - Parameter estimation for SLPs with Failure Adjusted Maximisation
pfd_meta - Probabilistic finite domains meta-interpreter

There is also a workshop on PLP; you might be interested in some of the
papers there:
http://stoics.org.uk/plp

Nicos Angelopoulos

2 Likes

@nicos, thanks for the pointers! Very relevant to my use-case.

Does this mean PLPs won’t scale to knowledge bases (KB) with 1M+ facts?

I’m currently trying to do parameter learning using cplint on a KB with ~900 examples/interpretations and ~12M facts as background knowledge. The learning (i.e. induce_par([train], P)) doesn’t finish after running for more than a week. Any suggestions for applying cplint to KBs of this scale?

To scale to larger DBs you should use Liftcover instead of Slipcover.
You can find the Liftcover manual here.
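For concreteness, a hedged sketch of the call-level difference between the two systems, assuming a program that already has the usual fold/2, output/1, input/1, and mode declarations set up (the directive and predicate names follow the pattern used elsewhere in this thread):

```prolog
% Hedged sketch: switching parameter learning from SLIPCOVER to
% LIFTCOVER.  Assumes the rest of the program (fold/2, output/1,
% begin_bg/end_bg, begin_in/end_in sections, etc.) is unchanged.

% SLIPCOVER version:
%   :- use_module(library(slipcover)).
%   :- sc.
%   ...
%   ?- induce_par([train], P).

% LIFTCOVER version:
:- use_module(library(liftcover)).
:- lift.
% ...same program sections as before...
% ?- induce_par_lift([train], P).
```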

1 Like

Thanks.

I tried the following simple parameter-learning problem with LiftCover and it returns false, whereas it works with Slipcover. The example is from the ProbLog documentation page. Any idea why?

:- use_module(library(liftcover)).

:- lift.
:- set_lift(verbosity,1).

:- begin_bg.
person(1).
person(2).
person(3).
person(4).

friend(1,2).
friend(2,1).
friend(2,4).
friend(3,2).
friend(4,2).

smokes(4).
influences(2,3).
stress(1).
:- end_bg.


:- begin_in.
stress(X):0.5 :- person(X).
influences(X,Y):0.5 :- person(X), person(Y).

smokes(X) :- stress(X).
smokes(X) :- friend(X,Y), influences(Y,X), smokes(Y).
:- end_in.

fold(train, [1, 3, 4, 5]).
fold(test, [2, 6]).

fold(all, F) :-
    fold(train, FTr),
    fold(test, FTe),
    append(FTr, FTe, F).

output(stress/1).
output(influences/2).

input(person/1).
input(friend/2).
input(smokes/1).

modeh(*, stress(-obj)).
modeh(*, influences(-obj, -obj)).
modeb(*, person(-obj)).
modeb(*, friend(-obj, -obj)).
modeb(*, smokes(-obj)).

determination(stress/1, person/1).
determination(influences/2, person/1).

begin(model(1)).
neg(smokes(2)).
end(model(1)).

begin(model(2)).
smokes(4).
end(model(2)).

begin(model(3)).
neg(influences(1, 2)).
end(model(3)).

begin(model(4)).
neg(influences(4, 2)).
end(model(4)).

begin(model(5)).
influences(2, 3).
end(model(5)).

begin(model(6)).
smokes(1).
end(model(6)).

Running it with Slipcover with induce_par([train], P), I get

P = [(smokes(_A):1.0:-stress(_A)), (smokes(_B):1.0:-friend(_B,_C),influences(_C,_B),smokes(_C)), (stress(_D):0.25;'' :0.75:-person(_D)), (influences(_E,_F):0.07692307692307693;'' :0.9230769230769231:-person(_E),person(_F))]

Running it with Liftcover with induce_par_lift([train], P), I get false.

The problem is that LIFTCOVER accepts a restricted syntax:
LIFTCOVER learns liftable probabilistic logic programs (LPLPs), a restricted version of Logic Programs with Annotated Disjunctions (LPADs). An LPLP is a set of annotated clauses whose heads contain a single atom annotated with a probability. For the rest, the usual Prolog syntax is used.

A clause in such a program has a single head with the target predicate (the predicate you want to learn) and a body composed of input predicates or predicates defined by deterministic clauses (typically facts).

A general LPLP clause has the following form:

h:p :- b1, b2, ..., bn.

An example inspired by the UWCSE dataset is (file uwcse.pl):

advisedby(A,B):0.3 :-student(A),professor(B),project(A,C),project(B,C). 
advisedby(A,B):0.6 :-student(A),professor(B),ta(C,A),taughtby(C,B). 

student(harry).
professor(ben).
project(harry,pr1).
project(harry,pr2).
project(ben,pr1).
project(ben,pr2).
taughtby(c1,ben).
taughtby(c2,ben).
ta(c_1,harry).
ta(c_2,harry).

For a slightly more expressive language, you can look at PHIL.

Thanks, I’ll check out LIFTCOVER and LPLPs.

It looks like SLIPCOVER, and by extension (I assume) LIFTCOVER, first learns a structure over the space of clauses, defined by the modeh and modeb declarations, using the examples. It then uses EMBLEM to perform parameter learning.

Is it possible to skip structure learning and just apply EMBLEM for parameter learning? If I remove all the mode declarations, it raises an error. How can I skip structure learning and do parameter learning with EMBLEM only, for a list of target predicates specified via output/1? I couldn’t find anything on applying EMBLEM on its own.