Inconsistent tabling behavior across Linux/macOS

Hello everyone, I am experiencing a very strange issue with my code. In short, I am writing a parser for an indentation-sensitive language that also has left-recursive rules. I came across the packrat parsing paper (Paper Link) and figured it would be a great strategy to tackle the problem.

To my surprise, the original version I wrote at home on my Ubuntu machine, with SWI-Prolog 9.1.3 compiled locally, parses the file as expected. However, when I run it on my Mac work machine with SWI-Prolog 9.2.2 from brew, I get an unexpected failure. To further complicate matters, when I run the code in a Docker container (on the Mac) with the official SWI-Prolog image, also at version 9.2.2, I again get the correct, expected behavior.

The code in question (the complete repository is available Here):

N.B. To simplify analysis, I have included two predicates, test_tokens/1 and test_parse/1, that avoid having to run the lexer and allow this file to be tested standalone. You can run test_parse/1 to observe the described behavior.

:- module(uvl, [parse/3]).
:- set_prolog_flag(double_quotes, string).
:- set_prolog_flag(encoding, utf8).
:- encoding(utf8).
:- use_module(library(dcg/basics)).
:- use_module(library(dcg/high_order)).
:- use_module(lexer).
%:- use_module(library(tabling)).
:- use_module(library(clpfd)).

%%Main entrypoint for the parser for UVL files
%%Is conformant to the grammar as defined in the uvl-parser
%%repository
parse(File, AST, Stop) :-
    lex_uvl(Tokens, File), !,
    phrase(uvl(AST), Tokens, Stop).

test_tokens(
    [
        features,indent,id_strict("Sandwich"),indent,mandatory,indent,id_strict("Bread"),dedent,
        optional,indent,id_strict("Sauce"),indent,alternative,indent,id_strict("Ketchup"),
        id_strict("Mustard"),dedent,dedent,id_strict("Cheese"),dedent,dedent,dedent,constraints,
        indent,id_strict("Ketchup"),impl,id_strict("Cheese"),dedent
    ]
).

test_parse(AST) :-
    test_tokens(Ts),
    phrase(uvl(AST), Ts).


uvl(ast(H,F,C)) -->
    header(H),
    optional(features(F),{F = nil}),
    optional(constraints(C),{C = nil}).

header(header(namespace(N),includes(In),imports(Im))) -->
    optional(namespace(N),{N = []}),
    optional(includes(In),{In = []}),
    optional(imports(Im),{Im = []}).

namespace(N) --> [namespace], reference(N), !.
reference(N) --> ([id_strict(N)], !) | ([id_not_strict(N)], !).
fully_qualified_reference(FQN) --> sequence(reference, [dot], FQN), {length(FQN, Len), Len #> 0}.

includes(Is) --> [include, indent], includes_(Is), [dedent].
includes_([I|Is]) --> language_level(I), !, includes_(Is).
includes_([]) --> [].

language_level(lang_level(Major,Minor)) -->
    major_level(Major), optional(minor_level(Minor), {Minor = []}).

major_level(Major) -->
    ([boolean_t], {Major = boolean_level}, ! ) |
    ([arithmetic_t], {Major = arithmetic_level}, ! ) |
    ([type_t], {Major = type_level}, ! ).
minor_level(Minor) -->
    [dot], !, (
        ([group_card], {Minor = group_card}, !) |
        ([feature_card], {Minor = feature_card}, !) |
        ([agg_fun], {Minor = agg_fun}, !) |
        ([string_constraints], {Minor = string_constraints}, !) |
        ([mul], {Minor = minor_all}, !)
    ).

imports(Is) --> [imports, indent], imports_(Is), [dedent].
imports_([import(NS,Alias)|Is]) -->
    reference(NS),
    optional(([as], reference(Alias)), {Alias = []}), !,
    imports_(Is).
imports_([]) --> [].

features(Fs) --> [features, indent], features_(Fs), [dedent].
features_(
    [feature(
        name(N), type(FeatureType), cardinality(Card),
        attributes(Attrs), group(Gs)
    )|Fs]
) -->
    optional(feature_type(FeatureType), {FeatureType = boolean}),
    fully_qualified_reference(N),
    optional(feature_cardinality(Card), {Card = nil}),
    optional(attributes(Attrs), {Attrs = nil}),
    optional(([indent], groups(Gs), [dedent]), {Gs = nil}), !,
    features_(Fs).
features_([]) --> [].

feature_type(boolean) --> [boolean_t], !.
feature_type(integer) --> [integer_t], !.
feature_type(string) --> [string_t], !.
feature_type(real) --> [real_t], !.

feature_cardinality(card(From,To)) -->
    [cardinality, lbracket, integer(From)],
    optional(([range], ([integer(To)] | ([mul], {To = inf}))), {To = nil}),
    [rbracket].
cardinality(card(From,To)) -->
    [lbracket, integer(From)],
    optional(([range], ([integer(To)] | ([mul], {To = inf}))), {To = nil}),
    [rbracket].

attributes(Attrs) --> [lbrace], attributes_(Attrs), [rbrace].
attributes_([A|As]) -->
    (value_attr(A) | constraint_attr(A)), [comma], !, attributes_(As).
attributes_([A]) --> (value_attr(A) | constraint_attr(A)), !.
attributes_([]) --> [], !.

value_attr(attr(key(K),val(V))) -->
    id(K), optional(value(V), {V = nil}).

value(bool(B)) --> [boolean(B)], !.
value(integer(I)) --> [integer(I)], !.
value(float(F)) --> [float(F)], !.
value(string(S)) --> [normal_string(S)], !.
value(attributes(Attrs)) --> attributes(Attrs), !.
value(vector(V)) --> [lbracket], vector(V), [rbracket], !.

vector([V|Vs]) --> value(V), [comma], !, vector(Vs).
vector([V]) --> value(V), !.
vector([]) --> [], !.

groups([G|Gs]) --> group(G), !, groups(Gs).
groups([]) --> [].

group(or_group(S)) -->  [or_group], group_spec(S), !.
group(alternative(S)) --> [alternative], group_spec(S), !.
group(optional(S)) --> [optional], group_spec(S), !.
group(mandatory(S)) --> [mandatory], group_spec(S), !.
group(cardinality(Card,S)) --> cardinality(Card), group_spec(S), !.

group_spec(Fs) --> [indent], features_(Fs), {length(Fs,L), L > 0}, [dedent].

constraint_attr(C) --> [constraint], constraint(C), !.
constraint_attr(Cs) --> [constraints], constraint_list(Cs), !.

:- table constraint/3.

constraint(equation(E)) --> equation(E).
constraint(paren_constraint(C)) --> [lparen], constraint(C), [rparen].
constraint(not_constraint(C)) --> [not], constraint(C).
constraint(and_constraint(C1,C2)) --> constraint(C1), [and], constraint(C2).
constraint(or_constraint(C1,C2)) --> constraint(C1), [or], constraint(C2).
constraint(impl_constraint(C1,C2)) --> constraint(C1), [impl], constraint(C2).
constraint(eq_constraint(C1,C2)) --> constraint(C1), [equivalence], constraint(C2).
constraint(literal_constraint(C)) --> fully_qualified_reference(C).

constraint_list(Cs) --> [lbracket], constraint_list_(Cs), [rbracket].
constraint_list_([C|Cs]) --> constraint(C), [comma], !, constraint_list_(Cs).
constraint_list_([C]) --> constraint(C), !.
constraint_list_([]) --> [], !.

constraints(Cs) --> [constraints, indent], constraints_(Cs), [dedent].
constraints_([C|Cs]) --> constraint(C), !, constraints_(Cs).
constraints_([]) --> [].

:- table equation/3.

equation(equal(E1,E2)) --> expression(E1), [equal], expression(E2).
equation(lt(E1,E2)) --> expression(E1), [lt], expression(E2).
equation(gt(E1,E2)) --> expression(E1), [gt], expression(E2).
equation(lte(E1,E2)) --> expression(E1), [lte], expression(E2).
equation(gte(E1,E2)) --> expression(E1), [gte], expression(E2).
equation(neq(E1,E2)) --> expression(E1), [neq], expression(E2).

:- table expression/3.

expression(E) --> ([float(E)] | [integer(E)] | [string(E)]).
expression(E) --> aggregate_function(E).
expression(E) --> fully_qualified_reference(E).
expression(sub_expression(E)) --> [lparen], expression(E), [rparen].
expression(add(E1,E2)) --> expression(E1), [add], expression(E2).
expression(sub(E1,E2)) --> expression(E1), [sub], expression(E2).
expression(mul(E1,E2)) --> expression(E1), [mul], expression(E2).
expression(div(E1,E2)) --> expression(E1), [div], expression(E2).

aggregate_function(sum(ref(R),ref_op(ROP))) -->
    [sum, lparen], optional((fully_qualified_reference(ROP),[comma]), {ROP = nil}),
    fully_qualified_reference(R), [rparen], !.
aggregate_function(avg(ref(R),ref_op(ROP))) -->
    [avg, lparen], optional((fully_qualified_reference(ROP),[comma]), {ROP = nil}),
    fully_qualified_reference(R), [rparen], !.
aggregate_function(len(ref(R))) -->
    [len, lparen], fully_qualified_reference(R), [rparen], !.
aggregate_function(floor(ref(R))) -->
    [floor, lparen], fully_qualified_reference(R), [rparen], !.
aggregate_function(ceil(ref(R))) -->
    [ceil, lparen], fully_qualified_reference(R), [rparen], !.

id(ID) --> [id_strict(ID)] | [id_not_strict(ID)].

Checking the output of the parse on the given example, I observe that the Linux output is the following (as expected):

A = ast(header(namespace([]),includes([]),imports([])),
    [feature(
        name(["Sandwich"]),
        type(boolean),
        cardinality(nil),
        attributes(nil),
        group([
            mandatory(
                [ feature(name(["Bread"]),
                    type(boolean),
                    cardinality(nil),
                    attributes(nil),
                    group(nil))
                ]),
            optional(
                [ feature(name(["Sauce"]),
                    type(boolean),
                    cardinality(nil),
                    attributes(nil),
                    group([
                        alternative(
                            [ feature(name(["Ketchup"]),
                                type(boolean),
                                cardinality(nil),
                                attributes(nil),
                                group(nil)),
                              feature(name(["Mustard"]),
                                type(boolean),
                                cardinality(nil),
                                attributes(nil),
                                group(nil))
                            ])
                        ])),
                  feature(name(["Cheese"]),
                    type(boolean),
                    cardinality(nil),
                    attributes(nil),
                    group(nil))
                ])
        ]))
    ],
    [impl_constraint(literal_constraint(["Ketchup"]),literal_constraint(["Cheese"]))])

Meanwhile, when I examine the output with phrase/3 to see the rest of the token list, I obtain the following partial parse on the Mac side:

A = ast(header(namespace([]),includes([]),imports([])),
    [ feature(name(["Sandwich"]),
	      type(boolean),
	      cardinality(nil),
	      attributes(nil),
	      group([ mandatory(
			  [ feature(name(["Bread"]),
				    type(boolean),
				    cardinality(nil),
				    attributes(nil),
				    group(nil))
			  ]),
		      optional(
			  [ feature(name(["Sauce"]),
				    type(boolean),
				    cardinality(nil),
				    attributes(nil),
				    group([ alternative(
						[ feature(name(["Ketchup"]),
							  type(boolean),
							  cardinality(nil),
							  attributes(nil),
							  group(nil)),
						  feature(name(["Mustard"]),
							  type(boolean),
							  cardinality(nil),
							  attributes(nil),
							  group(nil))
						])
					  ])),
			    feature(name(["Cheese"]),
				    type(boolean),
				    cardinality(nil),
				    attributes(nil),
				    group(nil))
			  ])
		    ]))
    ],
    nil)
Rest = [constraints,indent,id_strict("Ketchup"),impl,id_strict("Cheese"),dedent] 

Steps I’ve taken to try to solve the issue

I observed with the debugger and with phrase/3 that the issue seems to stem from the constraint//1 rule above. To examine this further, I isolated the code into the following and created an even smaller test case, available in the example/1 predicate.

:- module(packrat, [constraints/3]).
:- use_module(library(dcg/basics)).
:- use_module(library(dcg/high_order)).
:- use_module(library(tabling)).

:- table constraint/3.

constraint(equation(E)) --> equation(E).
constraint(paren_constraint(C)) --> [lparen], constraint(C), [rparen].
constraint(not_constraint(C)) --> [not], constraint(C).
constraint(and_constraint(C1,C2)) --> constraint(C1), [and], constraint(C2).
constraint(or_constraint(C1,C2)) --> constraint(C1), [or], constraint(C2).
constraint(impl_constraint(C1,C2)) --> constraint(C1), [impl], constraint(C2).
constraint(eq_constraint(C1,C2)) --> constraint(C1), [equivalence], constraint(C2).
constraint(literal_constraint(C)) --> fully_qualified_reference(C).

constraint_list(Cs) --> [lbracket], constraint_list_(Cs), [rbracket].
constraint_list_([C|Cs]) --> constraint(C), [comma], !, constraint_list_(Cs).
constraint_list_([C]) --> constraint(C), !.
constraint_list_([]) --> [], !.

constraints(Cs) --> [constraints, indent], constraints_(Cs), [dedent].
constraints_([C|Cs]) --> constraint(C), !, constraints_(Cs).
constraints_([]) --> [].

:- table equation/3.

equation(equal(E1,E2)) --> expression(E1), [equal], expression(E2).
equation(lt(E1,E2)) --> expression(E1), [lt], expression(E2).
equation(gt(E1,E2)) --> expression(E1), [gt], expression(E2).
equation(lte(E1,E2)) --> expression(E1), [lte], expression(E2).
equation(gte(E1,E2)) --> expression(E1), [gte], expression(E2).
equation(neq(E1,E2)) --> expression(E1), [neq], expression(E2).

:- table expression/3.

expression(E) --> ([float(E)] | [integer(E)] | [string(E)]).
expression(E) --> aggregate_function(E).
expression(E) --> fully_qualified_reference(E).
expression(sub_expression(E)) --> [lparen], expression(E), [rparen].
expression(add(E1,E2)) --> expression(E1), [add], expression(E2).
expression(sub(E1,E2)) --> expression(E1), [sub], expression(E2).
expression(mul(E1,E2)) --> expression(E1), [mul], expression(E2).
expression(div(E1,E2)) --> expression(E1), [div], expression(E2).

aggregate_function(sum(ref(R),ref_op(ROP))) -->
    [sum, lparen], optional((fully_qualified_reference(ROP),[comma]), {ROP = nil}),
    fully_qualified_reference(R), [rparen], !.
aggregate_function(avg(ref(R),ref_op(ROP))) -->
    [avg, lparen], optional((fully_qualified_reference(ROP),[comma]), {ROP = nil}),
    fully_qualified_reference(R), [rparen], !.
aggregate_function(len(ref(R))) -->
    [len, lparen], fully_qualified_reference(R), [rparen], !.
aggregate_function(floor(ref(R))) -->
    [floor, lparen], fully_qualified_reference(R), [rparen], !.
aggregate_function(ceil(ref(R))) -->
    [ceil, lparen], fully_qualified_reference(R), [rparen], !.

id(ID) --> [id_strict(ID)] | [id_not_strict(ID)].

fully_qualified_reference(R) --> id(R).

example([constraints, indent, id_strict(abc), impl, id_strict(bcd), dedent]).

and ran the following query:

?- example(L), phrase(constraints(C),L).
L = [constraints, indent, id_strict(abc), impl, id_strict(bcd), dedent],
C = [impl_constraint(literal_constraint(abc), literal_constraint(bcd))].

And, to my absolute surprise, this does indeed work on BOTH Linux and Mac, which makes the issue even more puzzling.

I am at a complete loss: I haven’t the slightest idea why this may be happening, and would appreciate any and all help.

Thank you all for your time.

Sincerely,
K

Before really digging into this, there are some things to note.

  • Results from tabled predicates are unique (as by variant), and are returned in no consistent order. If you run exactly the same thing in the same situation you may get reproducible results, as the order is based on the order of the hashes.
  • Using cuts in tabled programs is dangerous: you end up with incomplete tables. Only “green” cuts, i.e., those that do not affect the semantics, are fine. They are typically also not needed, as the tabling engine backtracks anyway and often kills the choice point quickly, with limited consequences.

So, my first step would be to remove all cuts and see what happens.
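For instance, the recursion over top-level constraints can drop its cut entirely. A minimal sketch of the cut-free pair of clauses, relying on the tabled constraint//1 to make re-tries cheap table lookups rather than full re-parses:

```prolog
% Cut-free variant of constraints_//1 from the grammar above.
constraints_([C|Cs]) --> constraint(C), constraints_(Cs).
constraints_([]) --> [].
```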

Hello! Thank you for your prompt answer and great insight! I removed all cuts from the program and the parse worked correctly.

Given your answer, and the fact that this now works again (on the Mac), I am under the impression that many (or most) of the cuts I had included in the grammar were unnecessary. Is it reasonable to avoid cuts in grammars in general, or are there specific patterns in grammars that may benefit from these cuts?

That is a hard question. Many, especially non-tabled, grammars require cuts to avoid endless backtracking. In the tabled world the backtracking is scheduled differently and re-execution often just looks up the result in the tables.

I must admit that my expertise on tabled grammars is not much. There is probably a lot more to say about this.


Tabling often avoids infinite left-recursion (which isn’t backtracking, unless you’re generating inputs from a parse tree), but that doesn’t avoid the need for cuts to avoid infinite backtracking, I think.
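A minimal self-contained illustration (hypothetical ones//1 nonterminal): without the table directive, the left-recursive first clause loops forever; with it, the parse terminates.

```prolog
:- table ones//1.

% Left-recursive: ones//1 calls itself before consuming any input.
ones(s(N)) --> ones(N), [one].
ones(zero) --> [one].

% ?- phrase(ones(T), [one,one,one]).
% T = s(s(zero)).
```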

PackRat parsing (or PEG) adopts a “first match” strategy for rules, which is a kind of cut, if I understand the papers I’ve read correctly (I still have some on my “tsundoku” pile).

The Prolog DCG Primer points to “DCGs + Memoing = Packrat Parsing: But is it worth it?”
ISTR something by Markus Triska about passing around an extra argument to avoid left-recursion, but I can’t find it (maybe in one of his videos?).

Just confirming that PEG only supports committed choice; it normally uses ‘/’ for alternatives rather than ‘|’ to emphasize the difference. So no backtracking issues that I’m aware of.
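In DCG terms, PEG’s ‘/’ behaves like if-then-else over the alternatives. A small sketch with made-up tokens a and b:

```prolog
% Two overlapping alternatives.
ab(ab) --> [a,b].
a_(a)  --> [a].

% CFG-style '|': both parses are available on backtracking.
cfg_alt(T) --> ( ab(T) ; a_(T) ).

% PEG-style '/': commit to the first alternative that matches.
peg_alt(T) --> ( ab(T) -> [] ; a_(T) ).

% ?- phrase(cfg_alt(T), [a,b], R).   % T = ab, R = [] ; T = a, R = [b]
% ?- phrase(peg_alt(T), [a,b], R).   % T = ab, R = [] (only)
```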

When I wrote some recursive-descent parsers, I usually ran the grammar through an LALR(k) parser generator to ensure that it was unambiguous. I wonder what people do nowadays with PEGs? By definition, they’re unambiguous, I think (because of the “committed choices”); but that doesn’t feel like it properly solves the problem of determining whether the grammar is “correct” (whatever “correct” means in this context).

Perhaps I’m missing your point, but I would think correct would mean that the parser implementing the grammar does what the rules of the grammar dictate. In the case of a parser generator, you’re depending on it to be correct. But there are other models, e.g., when using DCGs or regular expressions. At the end of the day, isn’t the parser tested for correctness like any other program?

I worry about the grammar being ambiguous. I’m not aware of tools for checking a grammar, other than parser generators – I don’t care about the generated parser; I just want the error messages from it.

Maybe I shouldn’t worry about ambiguous grammars – but an ambiguous grammar (which is “fixed” by things such as PEG’s “first match” rule) feels like a Prolog program that backtracks when I don’t expect it to.

It’s been a long time since I’ve written a full grammar, so I’ve forgotten what the tricky parts were. My recollection is that things such as the parentheses in C if-statements – that is, writing

if (x==1) do_something();

instead of

if x==1 do_something();

or the colons in Python if-statements

if x==1: 
  do_something()

are due to the following BNF grammar rules being ambiguous:

stmt ::= "if" expr stmt ";"
stmt ::= "{" stmt_block "}"
stmt ::= expr ";"
stmt_block ::= ε
stmt_block ::= stmt_block ";" stmt

It’s possible that the ambiguities that the parser generators complained about were mostly due to insufficient look-ahead, so in practice they don’t matter. But, for example, this is ambiguous:

expr ::= expr "+" expr
expr ::= expr "-" expr
expr ::= variable
expr ::= number

Then I would think that you would choose a PEG. From Parsing expression grammar - Wikipedia :

Unlike CFGs, PEGs cannot be ambiguous; a string has exactly one valid parse tree or none.

So, in Prolog terminology, semi-deterministic.

A quote from the abstract of Ford’s original paper (https://bford.info/pub/lang/peg.pdf):

For decades we have been using Chomsky’s generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing.

If you’re interested in PEG implementations I’ll put in a plug for pPEG (GitHub - pcanz/pPEG: A portable PEG grammar parser…, courtesy of Peter Cashin). There are implementations in a number of programming languages and I did one for SWI-Prolog (pack(pPEG)).

Perhaps I’ve been corrupted by CFGs, but I want to know if I’ve written

expr ::= expr "-" expr

that can result in two different parse trees for 1-2-3.

I’ll look into it (I still have Guido van Rossum’s article on PEGs in my “tsundoku” pile).
“Peter Cashin” – that’s a name I haven’t heard in a long time.

You couldn’t express this in PEG because it’s left recursive. Again from Ford’s paper:

Both left and right recursion are permissible in CFGs, but as with top-down parsing in general, left recursion is unavailable in PEGs because it represents a degenerate loop. … Since both left and right recursion in a CFG merely represent repetition, however, and repetition is easier to express in a PEG using repetition operators, this limitation is not a serious problem in practice.

My grammar skills are a bit rusty, but in PEG you would probably write something like:

expr = term ('-' expr)*
term = [0-9]+                 # for example

resulting in a parse tree:

?- peg_parse({|pPEG||expr = term ('-' expr)* term = [0-9]+|},"1-2-3",Tree).
Tree = expr([term("1"), expr([term("2"), term("3")])]).

Just to summarize PEG:

  1. Alternatives are committed choice - first success wins
  2. Repetition operators (*,+,?) are greedy.
  3. No left recursion (top down parsing).

I think this adds up to deterministic parsing, i.e., you can’t write an ambiguous grammar with PEG. If you want such a thing, you’ll need a different grammar model.

expr = term ('-' expr)*

For that example, the PEG way (i.e. the top-down way) is natural and beautiful. But what about this:

label ::= ident
labelled ::= var
labelled ::= labelled "." label

You can rewrite it the same way, but it goes against the grain of the intended semantics, and so it will make things like keeping track of attributes awkward.

This would seem to be the natural way of writing labelled:

labelled ::= var ('.' label)*

But it produces a right-recursive parse tree, as can be seen from the following - @ridgeworks: how would I distinguish between “a+b+c” and “a+(b+c)”?

?- peg_parse({|pPEG||expr = term ('+' expr)* term = '(' expr ')' / [a-z]+|},
              "a+b+c",Tree), print_term(Tree, [right_margin(10)]).
expr([ term("a"),
       expr([ term("b"),
              term("c")
            ])
     ])

?- peg_parse({|pPEG||expr = term ('+' expr)* term = '(' expr ')' / [a-z]+|},
             "a+(b+c)",Tree), print_term(Tree, [right_margin(10)]).
expr([ term("a"),
       expr([ term("b"),
              term("c")
            ])
     ])

?- peg_parse({|pPEG||expr = term ('+' expr)* term = '(' expr ')' / [a-z]+|},
              "(a+b)+c",Tree), print_term(Tree, [right_margin(10)]).
expr([ expr([ term("a"),
              term("b")
            ]),
       term("c")
     ])

The form of a pPEG syntax tree is derived from the grammar, and some simplification rules are applied (details omitted). If you needed to distinguish between the two cases, one way is to use a “component rule” whose name starts with a capital letter along with an explicit ‘name’ rule:

?- peg_parse({|pPEG||expr = Term ('+' expr)* Term = '(' expr ')' / name name = [a-z]+|},
              "a+b+c",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─Term
│ └─name "a"
└─expr
  ├─Term
  │ └─name "b"
  └─Term
    └─name "c"

?- peg_parse({|pPEG||expr = Term ('+' expr)* Term = '(' expr ')' / name name = [a-z]+|},
              "(a+b)+c",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─Term
│ └─expr
│   ├─Term
│   │ └─name "a"
│   └─Term
│     └─name "b"
└─Term
  └─name "c"

?- peg_parse({|pPEG||expr = Term ('+' expr)* Term = '(' expr ')' / name name = [a-z]+|},
              "a+(b+c)",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─Term
│ └─name "a"
└─Term
  └─expr
    ├─Term
    │ └─name "b"
    └─Term
      └─name "c"

Another way is to label the parenthesis literals:

?- peg_parse({|pPEG||expr = term ('+' expr)* term = pbeg expr pend / [a-z]+ pbeg = '(' pend = ')'|},
              "a+b+c",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─term "a"
└─expr
  ├─term "b"
  └─term "c"

?- peg_parse({|pPEG||expr = term ('+' expr)* term = pbeg expr pend / [a-z]+ pbeg = '(' pend = ')'|},
              "(a+b)+c",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─term
│ ├─pbeg "("
│ ├─expr
│ │ ├─term "a"
│ │ └─term "b"
│ └─pend ")"
└─term "c"

?- peg_parse({|pPEG||expr = term ('+' expr)* term = pbeg expr pend / [a-z]+ pbeg = '(' pend = ')'|},
              "a+(b+c)",Tree), ptree_printstring(Tree,PTree),writeln(PTree).
expr
├─term "a"
└─term
  ├─pbeg "("
  ├─expr
  │ ├─term "b"
  │ └─term "c"
  └─pend ")"

I expect which option you choose would depend on whether it made the subsequent semantic analysis easier or harder.