Hi everybody
I am experimenting with modelling theories/knowledge using CLP(Q,R), probability, and meta-interpreters. So far I am finding the combination very powerful and interesting. One of the cool things that can be done is to distinguish between situations in which a program (1) entails that P is false, (2) does not entail P, or (3) entails that P is unknown. The possibility of reasoning bidirectionally with numerical quantities, and of computing solutions even when some values are unknown (courtesy of CLP(Q,R)), is also very useful.
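The bidirectional aspect can be illustrated outside the toolbox with plain library(clpr). This is a minimal sketch (temp/2 is just an illustrative predicate name, not part of Theory Toolbox):

```prolog
:- use_module(library(clpr)).

% A temperature conversion stated as a constraint rather than a computation,
% so it can be queried in either direction.
temp(C, F) :- {F = 1.8 * C + 32}.
```

The same relation runs both ways: temp(100, F) gives F = 212.0, and temp(C, 212) gives C = 100.0.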
Below is a short hello world example. It uses Theory Toolbox 2, which contains some meta-interpreters. Some more (realistic) examples of such programs can be found here. Note that Theory Toolbox does term and goal expansion on :- and , so theoryToolbox2.pl must be loaded for the example to run.
Any questions and comments are welcome!
HELLO WORLD EXAMPLE:
% SETUP----------------------------------------------------------------------------------------------------------------------------------------------
:- include('theoryToolbox2.pl').
:- style_check(-singleton).
:- style_check(-discontiguous).
% HELLO WORLD EXAMPLE -------------------------------------------------------------------------------------------------------------------------------
% The equations relate the probabilities of being human, an animal or a thing to various properties. In q1 note that we obtain a probability value
% for each category (human, animal and thing) even if the truth or falsity of some premises is unknown (e.g. whether a cactus breathes).
% A probability of 0 in a conclusion (e.g. that a cactus is human) represents false, which the program can entail. The query q2 represents a
% scenario in which a program entails that some of the probabilities of belonging to a category are unknown.
human(E, X1) ⇐ moves(E, X2) ∧ breathes(E, X3) ∧ speaks(E, X4) ∧ {X1 = X2 * X3 * X4}.
animal(E, X1) ⇐ moves(E, X2) ∧ breathes(E, X3) ∧ speaks(E, X4) ∧ {X1 = X2 * X3 * (1 - X4)}.
thing(E, X1) ⇐ human(E, X2) ∧ animal(E, X3) ∧ {X1 = (1 - X2) * (1 - X3)}.
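The entailment of 0 mentioned in the comments can also be checked directly with library(clpr), independently of the toolbox's prove/3. This is a minimal sketch (classify/6 is a hypothetical helper mirroring the three equations above):

```prolog
:- use_module(library(clpr)).

% M, B, S are the probabilities of moving, breathing, and speaking.
% If M is bound to 0 when the constraints are posted, the products
% collapse even though B and S remain unknown.
classify(M, B, S, Human, Animal, Thing) :-
    {Human = M * B * S},
    {Animal = M * B * (1 - S)},
    {Thing = (1 - Human) * (1 - Animal)}.
```

For example, classify(0, _B, _S, H, A, T) entails H = 0, A = 0, and T = 1 even though breathes and speaks stay unknown, mirroring the cactus scenario in q1.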
q1 ⇐ GOAL = (human(E, X1), animal(E, X2), thing(E, X3)),
INPUT = [moves(cactus, 0), breathes(cactus, X4), speaks(cactus, X5)],
prove(GOAL, INPUT, RESULT),
showProof(RESULT, color).
q2 ⇐ GOAL = (human(E, X1), animal(E, X2), thing(E, X3)),
INPUT = [moves(mushroom, X4), breathes(mushroom, X5), speaks(mushroom, 0)],
prove(GOAL, INPUT, RESULT),
showProof(RESULT, color).
OUTPUT FROM q1: