Why do I get an "Arithmetic: evaluation error: `zero_divisor'" in cplint / slipcover?

swipl 8.4.1, cplint 4.5.0

Hi, see the code:


:- use_module( library( slipcover)).

:- sc.

:- set_sc(verbosity,3).
% :- set_sc(depth_bound,false).
% :- set_sc(neg_ex,given).


% empty background knowledge
bg([]).

% one fold, train, containing the single interpretation 1
fold(train,[1]).

% target predicate
output( p/0).

modeh(*,p).

% initial program whose parameters are to be learned
:- begin_in.

p.

:- end_in.

% interpretation 1: a single positive example for p
begin(model(1)).
p.
end(model(1)).


/*

(ins)?- induce_par([train],P),test(P,[train],LL,AUCROC,ROC,AUCPR,PR).
Initial theory
p:1.0.

Initial score 0.000000
Restart number 1

Random_restart: Score 0.000000
/* EMBLEM Final score 0.000000
Wall time 0.000000 */
p:1.0.

Testing
ERROR: Arithmetic: evaluation error: `zero_divisor'
ERROR: In:
ERROR:   [15] _73496 is 0/(0+0)
ERROR:   [14] auc:compute_pointsroc([1.0- ...],1.0e+20,0,0,1,0,[],_73554) at /home/ox/.local/share/swi-prolog/pack/auc/prolog/auc.pl:167
ERROR:   [13] auc:compute_areas([1.0- ...],_73598,_73600,_73602,_73604) at /home/ox/.local/share/swi-prolog/pack/auc/prolog/auc.pl:39
ERROR:   [12] auc:compute_areas_diagrams([1.0- ...],_73648,_73650,_73652,_73654) at /home/ox/.local/share/swi-prolog/pack/auc/prolog/auc.pl:66
ERROR:    [9] toplevel_call(user:user: ...) at /usr/local/lib/swipl/boot/toplevel.pl:1117
ERROR: 
ERROR: Note: some frames are missing due to last-call optimization.
ERROR: Re-run your program in debug mode (:- debug.) to get more detail.


*/

Thanks in advance.

Hi Frank,
the problem is in the test/7 predicate: since there is only a single positive example and no negative ones, when it tries to compute the ROC curve it computes a fraction whose denominator is the number of negative examples, which here is zero. It does not make sense to perform learning with no negative examples.
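To illustrate, a ROC point needs the false positive rate, whose denominator is the number of negatives. A minimal sketch (fpr/3 is a hypothetical predicate name, not the actual code in pack auc):

```prolog
% Sketch only: a ROC point needs the false positive rate FP/(FP+TN).
% With no negative examples both FP and TN are 0, so the division
% throws the evaluation error zero_divisor, exactly as in the trace
% above: _ is 0/(0+0).
fpr(FP, TN, FPR) :-
    FPR is FP / (FP + TN).
```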

Fabrizio


Hi Fabrizio,

today I watched your video

and I think I am beginning to make a little progress in understanding how it works. But maybe it is just hard to create example code for this topic that is minimal and at the same time highly expressive. I tend to search for such examples when learning.

The best example for beginners that I have found is, in my opinion, the shop example:

cplint/shop.pl at b4497e8fb1bd1c03805b87db5f75682464e7de8c · friguzzi/cplint · GitHub.

But as I understand it, that example only allows EMBLEM parameter learning (induce_par) and not SLIPCOVER structure learning (induce).

For SLIPCOVER I then additionally need input_cw, modeh, modeb and determination, and optionally lookahead. But I can leave out the 'in' section.
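A minimal sketch of those extra declarations, assuming the conventions from the cplint manual (p/1 and a/1 are hypothetical predicates chosen for illustration):

```prolog
% Sketch of the declarations induce/2 needs beyond induce_par/2;
% p/1 and a/1 are hypothetical predicates.
output(p/1).             % target predicate to be learned
input_cw(a/1).           % closed-world background predicate
modeh(*, p(+t)).         % head mode declaration
modeb(*, a(+t)).         % body mode declaration
determination(p/1, a/1). % a/1 may appear in clause bodies for p/1
% lookahead is optional; the :- begin_in. ... :- end_in. section
% can be omitted when using induce/2.
```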

You said my example is missing a negative example. How do I express this? Do I use '\+' in predicates, or 'neg' in models?

I have an example that seems to work but has no explicit negation in it:


:- use_module( library( slipcover)).

:- sc.

:- set_sc(verbosity,3).
% :- set_sc(depth_bound,false).
% :- set_sc(neg_ex,given).


bg([]).

fold(train,[1,2,3,4]).

output( p/0).
output( q/0).

modeh(*,p).
modeh(*,q).

:- begin_in.

p: 0.4 ; q : 0.6.

:- end_in.

begin(model(1)).
p.
end(model(1)).

begin(model(2)).
q.
end(model(2)).

begin(model(3)).
q.
end(model(3)).

begin(model(4)).
p.
end(model(4)).


/*

(ins)?- induce_par([train],P),test(P,[train],LL,AUCROC,ROC,AUCPR,PR).
Initial theory
p:0.4 ; q:0.6.

Initial score -5.708465
Restart number 1

Random_restart: Score -5.545177
/* EMBLEM Final score -5.545177
Wall time 0.000000 */
p:0.5 ; q:0.5.

Testing
P = [(p:0.5;q:0.5:-true)],
LL = -5.545177444479562,
AUCROC = AUCPR, AUCPR = 0.5,
ROC = c3{axis:_{x:_{max:1.0, min:0.0, padding:0.0, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}, y:_{max:1.0, min:0.0, padding:_{bottom:0.0, top:0.0}, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}}, data:_{rows:[x-'ROC', 0-0, 1.0-1.0], x:x}},
PR = c3{axis:_{x:_{max:1.0, min:0.0, padding:0.0, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}, y:_{max:1.0, min:0.0, padding:_{bottom:0.0, top:0.0}, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}}, data:_{rows:[x-'PR', 0.0-0.5, 0.25-0.5, 0.5-0.5, 0.75-0.5, 1-0.5], x:x}}.

(ins)?- in(P),test(P,[train],LL,AUCROC,ROC,AUCPR,PR).
Testing
P = [(p:0.4;q:0.6)],
LL = -5.708465422560582,
AUCROC = AUCPR, AUCPR = 0.5,
ROC = c3{axis:_{x:_{max:1.0, min:0.0, padding:0.0, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}, y:_{max:1.0, min:0.0, padding:_{bottom:0.0, top:0.0}, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}}, data:_{rows:[x-'ROC', 0-0, 0.5-0.5, 1.0-1.0], x:x}},
PR = c3{axis:_{x:_{max:1.0, min:0.0, padding:0.0, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}, y:_{max:1.0, min:0.0, padding:_{bottom:0.0, top:0.0}, tick:_{values:[0.0, 0.1, 0.2, 0.3, 0.4|...]}}}, data:_{rows:[x-'PR', 0.0-0.5, 0.25-0.5, 0.5-0.5, 0.75-0.5, 1.0-0.5], x:x}}.


*/

Am I making an indirect negation somewhere?

Thanks in advance,
Frank.

Apparently so, because test_prob/6 tells me:

(ins)?- induce_par([train],P),test_prob(P,[train],NP,NN,LL,L).
Initial theory
p:0.4 ; q:0.6.

Initial score -5.708465
Restart number 1

Random_restart: Score -5.545177
/* EMBLEM Final score -5.545177
Wall time 0.001000 */
p:0.5 ; q:0.5.

Testing
P = [(p:0.5;q:0.5:-true)],
NP = NN, NN = 4,
LL = -5.545177444479562,
L = [0.5-p(1), 0.5-(\+p(2)), 0.5-(\+p(3)), 0.5-p(4), 0.5-(\+q(1)), 0.5-q(2), 0.5-q(3), 0.5-(\+ ...)].

Is that correct?

Regards.

Hi Frank,
you did not need to specify negative examples because there is a setting, neg_ex, that controls how negative examples are generated: its default value, cw, generates negative examples automatically.
From the manual:
neg_ex (values: given, cw; default value: cw): if set to given, the negative examples in training and testing are taken from the test fold interpretations, i.e., those examples ex stored as neg(ex); if set to cw, the negative examples in training and testing are generated according to the closed-world assumption, i.e., all atoms for target predicates that are not positive examples. The set of all atoms is obtained by collecting the set of constants for each type of the arguments of the target predicate, so the target predicates must have at least one fact for modeh/2 or modeb/2 also for parameter learning.
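So with neg_ex set to given, explicit negatives would be written as neg/1 facts inside the interpretations. A sketch based on your second example:

```prolog
% Sketch, assuming neg_ex is set to given: explicit negative
% examples are stored as neg(Atom) facts inside the models.
:- set_sc(neg_ex, given).

begin(model(1)).
p.          % positive example: p holds in model 1
neg(q).     % explicit negative example: q is false in model 1
end(model(1)).
```

With the default cw you need neither: every atom of a target predicate that is not a positive example is treated as negative, which is why your example produced NP = NN = 4.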

Fabrizio