{
"paper_id": "1991",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:37:01.621450Z"
},
"title": "PROBABILISTIC LR PARSING FOR GENERAL CONTEXT-FREE GRAMMARS*",
"authors": [
{
"first": "See-Kiong",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "U.S.A"
}
},
"email": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Tomita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To combine the advantages of probabilistic gram mars and generalized LR parsing, an algorithm for constructing a probabilistic LR parser given a prob abilistic context-free grammar is needed. In this pa per, implementation issues in adapting Tomita's gen eralized LR parser with graph-structured stack to per form probabilistic parsing are discussed. Wrig__ ht_ and-_ Wrigley (1989) has proposed a probabilistic L\ufffdle construction algorithm for non-left-recursive contextlree grammars. To account for left recursions, a method for comput1ng item probabilities using the '_generation o sy m-s-nftirre'ar equa ions 1s presen ea: The notion of e erre pro a 1ties is proposed as a means for dealing with similar item sets with differing probability assignments.",
"pdf_parse": {
"paper_id": "1991",
"_pdf_hash": "",
"abstract": [
{
"text": "To combine the advantages of probabilistic gram mars and generalized LR parsing, an algorithm for constructing a probabilistic LR parser given a prob abilistic context-free grammar is needed. In this pa per, implementation issues in adapting Tomita's gen eralized LR parser with graph-structured stack to per form probabilistic parsing are discussed. Wrig__ ht_ and-_ Wrigley (1989) has proposed a probabilistic L\ufffdle construction algorithm for non-left-recursive contextlree grammars. To account for left recursions, a method for comput1ng item probabilities using the '_generation o sy m-s-nftirre'ar equa ions 1s presen ea: The notion of e erre pro a 1ties is proposed as a means for dealing with similar item sets with differing probability assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Probabilistic grammars provide a formalism which accounts for certain statistical aspects of the lan guage, allows stochastic disambiguation of sen tences, and helps in the efficiency of the syntactic analysis. Generalized LR parsing is a highly effi cient parsing algorithm that has been adapted to handle arbitrary context-free grammars. To com bine the advantages of both mechanisms, an algo rithm for constructing a generalized probabilistic LR parser given a probabilistic context-free gram mar is needed. In Wright and Wrigley (1989) , a probabilistic LR-table construction method has been proposed for non-left-recursive context-free grammars. However, in practice, left-recursive context-free grammars are not uncommon, and it is often necessary to retain this left-recursive grammar structure. Thus, a method for handling left-recursions is needed in order to attain proba bilistic LR-table construction for general context free grammars.",
"cite_spans": [
{
"start": 514,
"end": 539,
"text": "Wright and Wrigley (1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper , we concentrate on incorporat ing probabilistic grammars with generalized LR parsing for efficiency. Stochastic information from probabilistic grammar can be used in making sta tistical decision during runtime to improve per formance. In Section 3, we show how to adapt Tomita's(1985 Tomita's( , 1987 generalized LR parser with *This research was supported in part by National Science Foundation under contract IRI-8858085.",
"cite_spans": [
{
"start": 285,
"end": 298,
"text": "Tomita's(1985",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 315,
"text": "Tomita's( , 1987",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "graph-structured stack to perform probabilistic parsing and discuss related implementation issues. In Section 4, we describe the difficulty in comput ing item probabilities for left recursive context free grammars. A solution is proposed in Sec tion 5, which involves encoding item dependencies in terms of a system of linear equations. These equations can then be solved by Gaussian Elim ination (Strang 1980) to give the item probabili ties, from which the stochastic factors of the cor responding parse actions can be computed as de scribed in Wright and Wrigley (1989) .",
"cite_spans": [
{
"start": 397,
"end": 410,
"text": "(Strang 1980)",
"ref_id": "BIBREF4"
},
{
"start": 547,
"end": 572,
"text": "Wright and Wrigley (1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "154",
"sec_num": null
},
{
"text": "We also introduce the notion of deferred prob ability in Section 6 in order to prevent creating excessive number of duplicate items which a.re sim ilar except for their probability assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "154",
"sec_num": null
},
{
"text": "Probabilistic LR parsing is based on the notions of probabilistic context-free grammar and prob abilistic LR parsing table, which are both aug mented versions of their nonprobabilistic counter parts. In this section, we provide the definitions for the probabilistic versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A probabilistic context-free grammar (PCFG) (Suppes 1970 , We therall 1980 , Wright and Wrigley 1989 ) G, is a 4-tuple (N, T, R, S) where N is a set of non-terminal symbols including S the start symbol, T a set of terminal symbols, and R a set of probabilistic productions of the form <A--+ a, p > where A E N, a E (N U T) * , an d p the production probability. The probability p is the conditional probability P(alA) , which is the probability that the non-terminal A which appears during a derivation process is rewritten by the se quence a. Clearly if there are k A-productions with probabilities P i, ... ,Pk, then I::= l Pi = 1, since the symbol A must be rewritten by the right hand side of some A-production. The production probabilities can be estimated from the corpus as outlined in Fu and Booth(1975) or Fuj isaki(1984) .",
"cite_spans": [
{
"start": 44,
"end": 56,
"text": "(Suppes 1970",
"ref_id": "BIBREF5"
},
{
"start": 57,
"end": 74,
"text": ", We therall 1980",
"ref_id": null
},
{
"start": 75,
"end": 100,
"text": ", Wright and Wrigley 1989",
"ref_id": "BIBREF10"
},
{
"start": 119,
"end": 131,
"text": "(N, T, R, S)",
"ref_id": null
},
{
"start": 327,
"end": 331,
"text": "an d",
"ref_id": null
},
{
"start": 793,
"end": 799,
"text": "Fu and",
"ref_id": "BIBREF2"
},
{
"start": 800,
"end": 830,
"text": "Booth(1975) or Fuj isaki(1984)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CFG",
"sec_num": "2.1"
},
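{
"text": "A PCFG of this form can be encoded directly as a table of productions; the following minimal sketch (ours, not the paper's; the GRA1 rule probabilities follow Figure 1) also checks the constraint that the probabilities of the A-productions sum to one for every non-terminal A.

from collections import defaultdict

# GRA1 encoded as (lhs, rhs, probability) triples.
GRA1 = [
    ('S',  ('NP', 'VP'), 1.0),
    ('NP', ('n',),       0.5),
    ('NP', ('det', 'n'), 0.5),
    ('VP', ('v', 'NP'),  1.0),
]

def check_pcfg(rules):
    # For every non-terminal A, the A-productions' probabilities must sum to 1.
    totals = defaultdict(float)
    for lhs, _, p in rules:
        totals[lhs] += p
    for lhs, total in totals.items():
        assert abs(total - 1.0) < 1e-9, (lhs, total)

check_pcfg(GRA1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CFG",
"sec_num": "2.1"
},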
{
"text": "It is assumed that the steps of every derivation in the PCFG are mutually independent, meaning that the probability of applying a rewrite rule de- (1) S-+ NP VP ::!.",
"cite_spans": [
{
"start": 151,
"end": 160,
"text": "S-+ NP VP",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CFG",
"sec_num": "2.1"
},
{
"text": "Figure 1: GRAl: A Non-left Recursive PCFG (l) S-+NP VP l (2) NP -+ n i (3) NP -+ det n 3 ( 4) VP -+ v NP 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CFG",
"sec_num": "2.1"
},
{
"text": "(2) s-+ s pp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "1 (3) NP -+ n \ufffd (4) NP -+ det n f (5) NP -+ NP PP 10 (6) PP -+ prep NP 1 (7) VP -+ V NP 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "pends only upon the presence of a given nonter minal symbol ( the premis) in a derivation and not upon how the premis was generated. Thus, the probability of a derivation is simply the product of the production probabilities of the productions in the derivation sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
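{
"text": "As a small worked example (ours; it assumes the illustrative GRA1 probabilities shown in Figure 1), the leftmost derivation of the string n v det n has probability 1 × 1/2 × 1 × 1/2 = 1/4, the product of the rule probabilities used:

# Rule probabilities assumed as in the GRA1 sketch above.
RULE_PROB = {
    ('S',  ('NP', 'VP')): 1.0,
    ('NP', ('n',)):       0.5,
    ('NP', ('det', 'n')): 0.5,
    ('VP', ('v', 'NP')):  1.0,
}

def derivation_probability(steps):
    # Probability of a derivation = product of its rule probabilities.
    p = 1.0
    for rule in steps:
        p *= RULE_PROB[rule]
    return p

# S => NP VP => n VP => n v NP => n v det n
steps = [('S', ('NP', 'VP')), ('NP', ('n',)),
         ('VP', ('v', 'NP')), ('NP', ('det', 'n'))]
print(derivation_probability(steps))  # 0.25",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},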
{
"text": "Figures 1, 2 and 3 show three example PCFGs GRAl, GRA2 and GRA3 respectively. Inci dentally, GRAl is non-left recursive, GRA2 and GRA3 a.re both left-recursive, although GRA3 is \"more\" left-recursive than GRA2. GRA2 1 A Os a derivation cycle in which the first and last p1\ufffdions used in the derivation sequence are the same and occur now here else in the sequence.",
"cite_spans": [
{
"start": 211,
"end": 215,
"text": "GRA2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "(1) Table A probabilistic LR table is an augmented LR table of which the entries in the ACTION-table contains an additional field which is the Pi robability of the action. We call this probability {ikchastic Ja cwrj because it is the factor used in the computation (multiplication) of the runtime stochastic prod uct. The parser keeps this stochastic product during runtime for each ossible derivatio ectin t eir respective likelihoods. his product can be \ufffd co mputed uring runtime by multiplication using the precomputed stochastic factors of the parsing actions ( or by addition if the stochastic factors are expressed in logarithms ",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 112,
"text": "Table A probabilistic LR table is an augmented LR table of which the entries in the ACTION-table contains",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figu re 3: GRA3:A Massively Left-recursive PCFG",
"sec_num": null
},
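{
"text": "A minimal sketch (ours) of the runtime bookkeeping just described: the stochastic product is maintained by multiplying the precomputed stochastic factors of the actions taken, or equivalently by adding their logarithms; the factor values below are illustrative.

import math

factors = [0.5, 1.0, 0.4, 1.0]   # stochastic factors of the actions taken

product = 1.0
for f in factors:
    product *= f                  # runtime stochastic product

# Equivalent bookkeeping in the log domain: multiplication becomes addition.
log_product = sum(math.log(f) for f in factors)
assert abs(product - math.exp(log_product)) < 1e-12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3: GRA3: A Massively Left-recursive PCFG",
"sec_num": null
},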
{
"text": "S-+ S a1 1 ! (2) S-+ B a2 \ufffd (3) S-+ C a3 I (4) B-+ S a3 I (5) B-+ B a2 f (6) B-+ C a1 1 (7) C-+ S a2 j (8) C-+ B a3 \\5 (9) C-+ C a1 1 y (10) C-+ a3 B l ( 11) C-+ a3 1,;, 155",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figu re 3: GRA3:A Massively Left-recursive PCFG",
"sec_num": null
},
{
"text": "In this section, we describe how the efficient gen eralized LR parser with graph-structured stack in (Tomita 1985 (Tomita , 1987 can be adapted to parse prob abilistically using the augmented parsing table. In particular, we discuss how to maintain consistent runtime stochastic products base on three key no tions of the graph-structured stack: merging, lo cal ambiguity packing and splitting. We assume that the state number and the respective runtime stochastic product are stored at each stack node.",
"cite_spans": [
{
"start": 101,
"end": 113,
"text": "(Tomita 1985",
"ref_id": "BIBREF6"
},
{
"start": 114,
"end": 128,
"text": "(Tomita , 1987",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Probabilistic LR Parsers for Arbitrary PCFGs",
"sec_num": "3"
},
{
"text": "Merging occurs when an element is being shifted onto two or more of the stack tops. Figure 7 il lustrates a typical scenario in which a new state (State 3) is pushed onto stack tops States 1 and 2, of which original stochastic products are P1 and p2 respectively. These two nodes's stochas tic products are modified to P1 q 1 and p2q2 corre spondingly. If the stochastic factors of the actions has been represented as logarithms in the parse table, then their new \"product\" ( or rather, loga rithmic sums) would be P1 + q1 and P2 + q2 in stead. For the stochastic product of Node 3, we can either use the sum of its parents' products (giving p3 as P1 q 1 + p2q2) if we adopt strict prob abilistic approach, or the maximum of the prod ucts (ie, p3 = max (p1q1 , p2q2)) if we adopt the ",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Merging",
"sec_num": "3.1"
},
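{
"text": "The two merging policies can be captured in a few lines; this sketch is ours and the function name is illustrative.

def merge_product(p1, q1, p2, q2, strict=True):
    # p1, p2: the parents' stochastic products; q1, q2: the stochastic
    # factors of the shift actions leading into the merged node.
    if strict:
        return p1 * q1 + p2 * q2      # strict probabilistic approach
    return max(p1 * q1, p2 * q2)      # maximum likelihood approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": "3.1"
},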
{
"text": "V $ NP VP s 0 (sh2, \ufffd ) (sh l , \u00bd) 4 3 1 (re2, 1) (re2, 1) 2 (sh5, 1 ) 3 {ace, 1} 4 (sh6, 1} 7 5 (re3, 1 ) (re3, 1) 6 (sh2, \u00be) (sh l , \u00bd) 8 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": "3.1"
},
{
"text": "Figure 5: Probabilistic Parsing Table for GRA2",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 41,
"text": "Table for",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Merging",
"sec_num": "3.1"
},
{
"text": "(re6, 1\ufffd0 } 12 (re7, t0 } maximum likelihood approach . Note that although the maximum likelihood approach is in some sense less \"accurate\" than the strict probabilistic ap proach, it is a reasonable approximate and has an added advantage when the stochastic factors are represented in logarithms, in which case the stochastic \"products\" of the parse stack can be maintained using only addition and subtraction operators( assuming, of course, that additions and subtractions are \"cheaper\" computationally than multiplications and divisions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "11",
"sec_num": null
},
{
"text": "Local ambiguity packing occurs when two or more branches of the stack are reduced to the same non terminal symbol. To be precise, this occurs when the parser attempts to create a GOTO state node (after a reduce action, that is) and realize that the parent already has a child node of the same state. In this case there is no need to create the GOTO node but to use that child node ( \"pack ing\" ). This is equivalent to the merging of shift nodes, and can be handled similarly: the runtime product of the child node is modified to the new \"merged\" product ( either by summation or max imalization). This modification should be propa gated accordingly to the successors of the packed child node, if any. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Packing",
"sec_num": "3.2"
},
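{
"text": "A sketch (ours) of the propagation step: when a child node is packed, its runtime product is rescaled and the change is pushed to its successors. The Node class and the proportional-rescaling scheme are assumptions for exposition.

from dataclasses import dataclass, field

@dataclass
class Node:
    product: float
    successors: list = field(default_factory=list)

def repack(node, new_product):
    # Update a packed node's stochastic product and propagate the change
    # proportionally (a simple recursive walk; a DAG would need a visited set).
    if node.product == 0.0:
        return
    scale = new_product / node.product
    node.product = new_product
    for succ in node.successors:
        repack(succ, succ.product * scale)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Packing",
"sec_num": "3.2"
},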
{
"text": "a1 a2 0 1 (rell,\ufffd} 2 (sh9, \u00bd) (sh8, ,, ,j \ufffd ) 3 (shll, :\ufffd ) 4 (sh13, m) 5 (sh9, \ufffd) (sh8 , ? 6 ._ 6 0 ) 6 (1'e10, 27{4 ) (sh1 5, \ufffd11 ) 7 (sh1 6, {;\ufffd) 8 :re7, 11 g c rel, 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Packing",
"sec_num": "3.2"
},
{
"text": "Split. ti ng occurs when there is an action conflict. Th is can be handled straightforwardly by creat ing corresp onding new nodes for the new resulting states with the respective runtime products (such as the product of the parent's stochastic prod uct with the action's stochastic factor). Splitting can also occur when reducing (popping) a merged node. In this case, the parser needs to recover the original runtime product of the merged com ponents, which can be obtained with some math ematical manipulation from the runtime products recorded in the merged node's parents. Figure 8 illustrates a simple situation in which a merged node is split into two. In the figure, a reduce action ( of which the corresponding production is of unit length) is applied at Node 3, and the GOTO's for Nodes 1 and 2 are states 4 and 5 respectively. In the case that strict probabilis tic approach is used in merging (see above), we get p 4 = P i7+ P2 p3 q and Ps = P i7+ P2 p3q. If the maximum likelihood approach is used, then p 4 = m ax f; 1 ,1':!) p 3q and ]Js = m a x f; 1 ,p 2) p3q. Further more, if the stochastic fa ctors have been expressed in lognrit.l1ms, t.hen p . . , = J)3 -max (p1 , P2 ) + PI+ q and 71\" = JJ::1 -max (71 1 ,pJ+p:2 +q (notice that only a3 $",
"cite_spans": [],
"ref_spans": [
{
"start": 578,
"end": 586,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Splitting",
"sec_num": "3.3"
},
{
"text": "addition and subtraction are needed, as promised) . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting",
"sec_num": "3.3"
},
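{
"text": "The recovery arithmetic above can be sketched as follows (ours; it assumes the merged product p3 was formed from parents p1 and p2 as described in Section 3.1):

def split_products(p1, p2, p3, q, strict=True):
    # p1, p2: the merged node's parents' products; p3: the merged product;
    # q: the stochastic factor of the reduce action triggering the split.
    if strict:                        # merged with p3 = p1*q1 + p2*q2
        total = p1 + p2
        return (p1 / total) * p3 * q, (p2 / total) * p3 * q
    m = max(p1, p2)                   # merged with the maximum
    return (p1 / m) * p3 * q, (p2 / m) * p3 * q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting",
"sec_num": "3.3"
},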
{
"text": "In general, there may be more than one splitting corresponding to a reduce action (ie, we may have to pop more than one merged nodes). For every split node, we must recover the runtime products of its parents to obtain the appropriate stochas tic products for the resulting new branches. This can be tricky and is one of the reasons why a tree-structured stack ( described below) instead of graphs might perform better in some cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P2",
"sec_num": null
},
{
"text": "The main point of maintaining the runtime stochastic products is to use it as a good indicator function to guide search. In practical situation, the grammar can be highly ambiguous, resulting in many branches of ambiguity in the parse stack. As discussed before, the runtime stochastic prod uct reflects the likelihood of that branch to com plete successfully. In To mita's generalized LR parser, processes are synchronized by performing all the reduce actions before the shift actions. In this way, the processes are made to scan the input at the same rate, which in turn allows the unification of processes in the same state. Thus, the runtime stochastic products can be a good enough indicator of how promising each branch (ie. partial derivation) is, since we are comparing among partial derivations of same in put length. We can perform beam search by prun ing away branches which are less promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Stochastic Product to Guide Search",
"sec_num": "3.4"
},
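{
"text": "Beam pruning over the synchronized branches can then be as simple as the following sketch (ours; the branch representation is an assumption):

def prune(branches, beam_width):
    # branches: (stochastic_product, stack_node) pairs, all of which have
    # consumed the same number of input symbols and are thus comparable.
    branches.sort(key=lambda b: b[0], reverse=True)
    return branches[:beam_width]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Stochastic Product to Guide Search",
"sec_num": "3.4"
},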
{
"text": "If instead of the breadth-first style beam search approach described above we employ a best first ( or depth-first) strategy, then not all of the branches will correspond to the same input length. Since the measure of runtime stochastic product is biased towards shorter sentences, a good heuris tic would have to take into account of the num ber of input symbols consumed. Even so, han dling best-first search can be tricky with To mita's graph-structured stack without the process-input synchronization, especially with the merging and packing of nodes. Presumably, we can have ad ditional data structure to serve as lookup table of the nodes currently in the graph stack: for in stance, an n by m matrix ( where n is the num ber of states in the parse table and m the in put length) indexed by the state number and the input position storing pointers to current stack nodes. With this lookup table, the parser can check if there is any stack node it can use before creating a new one. However, in the worst case, the nodes that could have been merged or packed might have already been popped of the stack be fore it can be re-used. In this case, the parser degenerates into one with tree-structured stack (ie, only splitting, but no merging and packing) and the laborious book-keeping of the stochastic products due to the graph structure of the parse stack seems wasted. It might be more productive then to employ a tree-structured stack instead of a graph-structured stack, since the book-keeping of runtime stochastic products for trees is much simpler: as each tree branch represents exactly one possible parse, we can associate the respec tive runtime stochastic products to the leaf nodes (instead of every node) in the parse stack, and up dating would involve only multiplying ( or adding, in the logarithmic case) with the stochastic fac tors of the corresponding parse actions to obtain the new stochastic products. The major draw back of the tree-stack version is that it is merely a. slightly compacted form of stack list (Tomita 1987 )which means that the tree can grow un manageably large in a short period, unless suitable pruning is done. Hopefully, the runtime stochastic product will serve as good heuristic for pruning the branches; but whether it is the case that the sim plicity of the tree implementation overrides that of the representational efficiency of the graph version remains to be studied.",
"cite_spans": [
{
"start": 2036,
"end": 2048,
"text": "(Tomita 1987",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Stochastic Product to Guide Search",
"sec_num": "3.4"
},
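{
"text": "A sketch (ours) of the lookup table suggested above, an n by m matrix indexed by state number and input position; the sizes and the helper name are illustrative:

n_states, input_len = 17, 12          # illustrative sizes

# lookup[state][position] points at the current stack node, if any.
lookup = [[None] * (input_len + 1) for _ in range(n_states)]

def find_or_create(state, pos, make_node):
    if lookup[state][pos] is None:    # re-use an existing node when possible
        lookup[state][pos] = make_node()
    return lookup[state][pos]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Stochastic Product to Guide Search",
"sec_num": "3.4"
},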
{
"text": "The approach to probabilistic LR table construc tion for non-left recursive PCFG , as proposed by Wright and Wrigley(1989) , is to augment the stan dard SLR table construction algorithm presented in Aho and Ullman(1977) to generate a proba bilistic version. The notion of a probabilistic item (A -+ o:\u2022/3, p) is introduced, with (A -+ o: \u2022 /3) being an ordinary LR(O) item, and p the item probabil ity, which is interpreted as the posterior probabil ity of the item in the state. The major extension is the computation of these item probabilities from which the stochastic factors of the parse actions can be determined. Wright and Wrigley(1989) have shown a direct method for computing the item probabilities for non-left recursive grammars. The probabilistic parsing table in Figure 4 for the non-left recursive grammar GRAl is thus con structed.",
"cite_spans": [
{
"start": 98,
"end": 122,
"text": "Wright and Wrigley(1989)",
"ref_id": "BIBREF10"
},
{
"start": 199,
"end": 219,
"text": "Aho and Ullman(1977)",
"ref_id": "BIBREF1"
},
{
"start": 621,
"end": 645,
"text": "Wright and Wrigley(1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 778,
"end": 786,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem with Left Recursion",
"sec_num": "4"
},
{
"text": "Since there is an algorithm for removing left re cursions from a context-free grammar (Aho and Ullman 1977) , it is conceivable that the algo rithm can be modified to convert a left-recursive PCFG to one that is non left-recursive. Given a left-recursive PCFG, we can apply this algo rithm, and then use Wright and Wrigley( 1989) 's table construction method on the resulting non left-recursive grammar to create the parsing ta ble. Unfortunately, the left-recursion elimination algorithm destructs the original grammar struc ture. In practice, especially in natural language processing, it is often necessary to preserve the original grammar structure. Hence a method for constructing a parse table without grammar con version is needed.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Aho and Ullman 1977)",
"ref_id": "BIBREF1"
},
{
"start": 304,
"end": 329,
"text": "Wright and Wrigley( 1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem with Left Recursion",
"sec_num": "4"
},
{
"text": "For grammars with left recursion, the computa tion of item probabilities becomes nontrivia. l. First of all, item probability ceases to be a \"probabil ity\" , as an item which is involved in left recursion is effectively a coalescence of an infinite number of similar items along the cyclic paths, so its as sociated stochastic value is the sum of posteriori probabilities of these packed items. For instance, and there is no guarantee that q \ufffd l. This is un derstandable since (C '----+--B,, q) is a coalescence of items which are not necessarily mutually ex clusive. However, we need not be alarmed as the stochastic values of the underlying items are still legitimate probabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 494,
"text": "(C '----+--B,, q)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem with Left Recursion",
"sec_num": "4"
},
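{
"text": "A quick numerical check (ours, with illustrative values of p and p_B) of the geometric sum behind the coalesced stochastic value:

p, p_B = 0.9, 0.4                     # illustrative probabilities

# q = sum over i >= 1 of p * p_B**i, a geometric series.
q_series = sum(p * p_B**i for i in range(1, 200))
q_closed = p * p_B / (1 - p_B)
assert abs(q_series - q_closed) < 1e-12
print(q_closed)                       # ~0.6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem with Left Recursion",
"sec_num": "4"
},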
{
"text": "Owii1g to this coalescence of infinite items into one single item in left recursive grammars, the computation of the stochastic values of items in volves finding infinite sums of the items' stochastic values. For grammars with simple left recursion (that is, there are only finitely many left recursion loops) such as GRA2, we can still figure out the sum by enumeration, since there is only a finite number of the infinite sums corresponding to the left recursion loops. With massive left recursive gramma.rs like GRA3 in which there is an infinite number of (intermingled) left recursion loops, the enumeration method fails. We shall illustrate this effect in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem with Left Recursion",
"sec_num": "4"
},
{
"text": "For grammars with simple left recursion, it is pos sible to derive the stochastic values by simple cycle clct.ect.ion. For instance, consider the following set of LR(0) items for GRA2 in Figure 9 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simple Left Recursion",
"sec_num": "4.1"
},
{
"text": "S1 = (So X \u00bd)+(So X ft x \u00bd)+(So X t/ X \u00bd) + ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Left Recursion",
"sec_num": "4.1"
},
{
"text": "For grammars with intermingled left recursions such as GRA3, computation of the stochastic val ues of the items becomes a convoluted task. Con-sider the start state for GRA3, which is depicted in Figure 10 . ls : Baa, Ss] ]9 :",
"cite_spans": [
{
"start": 213,
"end": 221,
"text": "Baa, Ss]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 196,
"end": 205,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Massive Left Recursion",
"sec_num": "4.2"
},
{
"text": "[C --Cai , -Sg] 110 : [C -\u2022aaB, S1 0 ] \u2022 111 , : [C -\u2022aa, Su]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Massive Left Recursion",
"sec_num": "4.2"
},
{
"text": "Consider the item 11 . In an attempt to write down a closed expression for the stochastic value S1 , we discover in despair that there is an infi nite number of loops to detect, as S is immedi ately reachable . by all n_ on-terminals, and so are the other nonterminals themselves. This intermin gling of the _loops renders it impossible to write down closed expressions for S 1 through Su .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Massive Left Recursion",
"sec_num": "4.2"
},
{
"text": "In this section, we describe a way of computing item probabilities by encoding the item depen dencies in terms of systems of linear equations and solving them by Gaussian Elimination (Strang 1980) . This method handles arbitrary context free grammar including those with .left recursions. We incorporate this method with Wright and W:rigley's(1989) algorithm for computing stochas tic \u2022 factors for the parse actions to obtain a ta ble construction algorithm which handles general PCFG. A formal description of the complete table construction algorithm is in the Appendix . In the following \u2022discussion of the algorithm, lower case greek characters such as a and /3 will denote strings in (N U Tt' and upper case alpha bets like A and B denote symbols in N unless mentioned otherwise.",
"cite_spans": [
{
"start": 183,
"end": 196,
"text": "(Strang 1980)",
"ref_id": "BIBREF4"
},
{
"start": 321,
"end": 348,
"text": "Wright and W:rigley's(1989)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Parse Table Construction for Left Recursive Grammars",
"sec_num": "5"
},
{
"text": "For completeness, we mention briefly here how the stochastic values of items in the kernel set can be computed as proposed by Wright and Wrigley(1989) : ",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "Wright and Wrigley(1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Values of Kernel Items",
"sec_num": "5.1"
},
{
"text": "The inter-dependency of items within a state can be represented most straightforwardly by a depen dency forest. If we label each arc by the proba bility of the rule represented by that item the arc is pointing at, then the posterior probability of an item in a dependency forest is simply the total product of the root item's stochastic value and the arc costs along the path from the root to the item. This dependency forest can be compacted into a dependency graph in which no item occurs in more than one node. That is, each graph node represents a stochastic item which is a coalesce of all the nodes in the dependency forest represent ing that particular item. The stochastic value of such an item is thus the sum of the posterior prob abilities of the underlying items. Figure 11 depicts the graphical relations of the items in the example state of GRA2 in Figure 9 . We shall not attempt to depict the massively cyclic dependency graph of the start state for GRA3 ( Figure 10) here. ",
"cite_spans": [],
"ref_spans": [
{
"start": 776,
"end": 785,
"text": "Figure 11",
"ref_id": null
},
{
"start": 863,
"end": 871,
"text": "Figure 9",
"ref_id": null
},
{
"start": 973,
"end": 983,
"text": "Figure 10)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Graph",
"sec_num": "5.2"
},
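{
"text": "The forest computation described above reduces to a product along a path; a trivial sketch (ours, with illustrative numbers):

def path_posterior(root_value, arc_costs):
    # Posterior of an item in a dependency forest: the root item's
    # stochastic value times the arc costs along the path to the item.
    p = root_value
    for cost in arc_costs:
        p *= cost
    return p

print(path_posterior(1.0, [0.5, 0.1]))   # 0.05",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Graph",
"sec_num": "5.2"
},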
{
"text": "Rather than attempting to write down a closed expression for the stochastic value of each item, we resort to creating a system of linear equations in terms of the stochastic values which encapsu late the possibly cyclic dependency structure of the items in the set. Consider a state \\JI with k items, m of which are kernel items. That is, \\JI is the set of items { I j 11 :S j :S k} such that I i is a kernel item if 1 :S j :S rn.. Again, let S i be a variable represent ing the stochastic value of item I j . The values of 160 S 1 , ... , Sm are known since they can be computed as outlined in Section 5 .1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Linear Equations",
"sec_num": "5.3"
},
{
"text": "I j , m < j :S k. Let { Ii1 1 \u2022 \u2022 \u2022 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "lj n, } be the set of items in 'P from which there is an arc into I j in the dependency graph for 'Ill. Also, let P j i denote the arc cost of the arc from item I i i to I j . Then, the equation for the stochastic value of I j , namely S j , would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n' S j = L P i i X S j i i = l",
"eq_num": "(1)"
}
],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "Note that Equation 1 ... , Sk . This means that from 1 we have a system of (k-m) linear equations with (k -m) unknowns. This can be solved using standard algorithms like simple Gaussian Elimination (Strang 1980) . The task of generating the equations can be fur ther simplified by the following observations:",
"cite_spans": [
{
"start": 21,
"end": 31,
"text": "... , Sk .",
"ref_id": null
},
{
"start": 198,
"end": 211,
"text": "(Strang 1980)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "1. The cost of any incoming arc of a non kernel item Ii = [Ai \ufffd \u2022ai, Si] is the produc tion probability of the production (Ai -+ Cl'i, P r ) -In other words, P j i = Pr for i = 1 ... n'. Equation 1can then be simplified",
"cite_spans": [
{
"start": 58,
"end": 72,
"text": "[Ai \ufffd \u2022ai, Si]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "n' to S j = Pr X L i = l S j ;\u2022 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": ". Within a state, the non-kernel items repre senting any X-production have the same set of items with arcs into them. Therefore, these npn-kernel items have the same value for L;= l S r,, (which is similar to the Sx in Section 5 .1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "Thus, Equation (1) can be further simplified",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "n' . as S j = P r X SA j where SA j = L x= l S r,, .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
{
"text": "With that, the system of linear equations for each state can be generated efficiently without having to con struct explicitly the item dependency graph .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider a non-kernel item",
"sec_num": null
},
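{
"text": "A sketch (ours) of the simplification in Point 2: the shared sums SA_X are obtained by partitioning a state's items by the symbol expected after the dot; the item representation is an assumption.

from collections import defaultdict

def expecting_sums(items, values):
    # items: (lhs, rhs, dot) triples for one state; values: their S_j's.
    # Groups the stochastic values by the symbol expected after the dot,
    # giving the SA_X sums shared by all non-kernel X-production items.
    SA = defaultdict(list)
    for (lhs, rhs, dot), s in zip(items, values):
        if dot < len(rhs):
            SA[rhs[dot]].append(s)
    return SA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Linear Equations",
"sec_num": "5.3"
},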
{
"text": "The system of linear equations for the state de picted in Figures 9 and 11 for grammar G RA2 is as \u00a3 11",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 74,
"text": "Figures 9 and 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Examples",
"sec_num": "5.3.1"
},
{
"text": ". So = f (Given) S2 = \u00be(S0 + S3 ) 0 ows . S 1 = 2 (50 + S3 ) S3 = k (So + S3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "5.3.1"
},
{
"text": "On solving the equations, we have S 1 = 2 5 1, S2 = 2 4 1 and S3 = l 1 , which is the same solution as the one obtained by enumeration (Section 4.1) . Similarly, the following system of linear equa tions is obtained for the start state of massively left recursive grammar GRA3: So = 1 S6 = t (S2 + S5 + Ss ) S1 = t (So + S1 + 84 + S1 ) S1 = - On solvinp; the equations, we have the solutions 29 11 6 s\ufffd 64 3 2 96 \ufffd 1 l 2 and 1. for the 1, 77 , 77 , 77 , 77 , 77 , 77 , 7, 7, 7, 7 ' 7 . stochastic variables S o through Su respectively.",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "(Section 4.1)",
"ref_id": null
},
{
"start": 444,
"end": 485,
"text": "77 , 77 , 77 , 77 , 77 , 7, 7, 7, 7 ' 7 .",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "5.3.1"
},
{
"text": "The systems of linear equations generated during table construction can be solved using the popular method Gaussian Elimination which can be found in many numerical analysis or linear algebra text books (for example, Strang 1980) or linear pro gramming books (such as Vasek Ch \ufffd atal, 1983) . The basic idea is to eliminate the variables one by one by repeated substitutions. For instance, if we have the following set of equations:",
"cite_spans": [
{
"start": 217,
"end": 229,
"text": "Strang 1980)",
"ref_id": "BIBREF4"
},
{
"start": 259,
"end": 290,
"text": "(such as Vasek Ch \ufffd atal, 1983)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},
{
"text": "(1) S1 = a11S1 + a12S2 + ... + a1 n S n (n) Sn = a.n 1S1 + a n 2S2 + \u2022 \u2022 \u2022 + annSn .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},
{
"text": "We can eliminate S 1 and remove equation (1) from the system by substituting, for all oc \ufffd ur rences of S1 in equations (2) through (n), the right hand side of equation 1. We repeatedly remove variables S1 through Sn -1 in the same way, until we are left with only one equation with one vari able S n . Having thus obtained the value for S n , we perform back substitutions until solutions for S 1 through S n are obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},
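{
"text": "A compact sketch (ours) of the elimination procedure, applied to the GRA2 state system of Section 5.3.1 with exact rational arithmetic:

from fractions import Fraction as F

def solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [F(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# GRA2 state system with S0 = 3/7 known and S1, S2, S3 unknown:
# S1 - (1/2)S3 = (1/2)S0;  S2 - (2/5)S3 = (2/5)S0;  (9/10)S3 = (1/10)S0
S0 = F(3, 7)
A = [[F(1), F(0), F(-1, 2)],
     [F(0), F(1), F(-2, 5)],
     [F(0), F(0), F(9, 10)]]
b = [S0 / 2, S0 * F(2, 5), S0 / 10]
print(solve(A, b))   # [Fraction(5, 21), Fraction(4, 21), Fraction(1, 21)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},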
{
"text": "Complexity-wise, Gaussian elimination is a cu bic algorithm (Vasek Chvatal, 1983) in terr !1 s of t \ufffd e number of variables (ie, the number of items m the closure set) . The generation of linear equa tions per state is also polynomial since we only need to find the stochastic sum expressions the SA . 's, for the nonterminals (Point 2 of Sec tion 5.3). These expressions can be obtc1:ined _ by partitioning the items in the state set accordm _ g to their left hand sides. There are 0( mn) possi ble LR(O) items (hence the size of each state is O( mn)) and 0(2 mn ) possible sets where n is the number of productions and m the length of the longest right hand side. Hence, asymptotically, the computation of the stochastic values would not affect the complexity of the algorithm, since it has only added an extra polynomial amount of work for ea. eh of the exponentially many possible sets.",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "(Vasek Chvatal, 1983)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},
{
"text": "Of course, we could have used other methods for solving these linear equations, for example, by finding the inverse of the matrix representing the equations (Vasek Chvatal, 1983) . It is also plausi ble that particular characteristics of the equations generated by the construction algorithm can be exploited to derive the equations' solution more efficiently. We shall not discuss further here.",
"cite_spans": [
{
"start": 157,
"end": 178,
"text": "(Vasek Chvatal, 1983)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Linear Equations with Gaussian Elimination",
"sec_num": "5.4"
},
{
"text": "Since the stochastic values of the terminal items in a parse state are basically posterior probabili-ties of that item given the root (kernel) item, the computation of the stochastic factors for the pars ing actions, which is as presented in Wright a _n d Wrigley(1989) , is fairly straightfor ': ard. For sh ! ft action say from State i to State z + 1 on seemg the in\ufffdut symbol x, the corresponding stochas tic factor for this action would be Sr, the sum of the stochastic values of all the leaf items in State i which are expecting the symbol x. For reduce-action, the stochastic factor is simply the stochastic value S i of the item representing the re duction, namely [Ai \ufffd Oi \u2022 , Si ] if the red \ufffd ction is via production Ai \ufffd Oi . For accept-action, the stochastic factor is the stochastic value S n of the item [S' \ufffd S\u2022, S n ], since acceptance can be trea \ufffd ed as a final reduction of the augmented production S' \ufffd S, where S' is the system-introduced start symbol for the grammar.",
"cite_spans": [
{
"start": 242,
"end": 269,
"text": "Wright a _n d Wrigley(1989)",
"ref_id": null
},
{
"start": 672,
"end": 689,
"text": "[Ai \ufffd Oi \u2022 , Si ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sto chastic Factors",
"sec_num": "5.5"
},
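{
"text": "The factor computations can be read off a state's items directly; a sketch (ours, with an assumed item representation):

def shift_factor(items, x):
    # items: (lhs, rhs, dot, value) tuples for one state; the shift factor
    # on x is the sum of the values of the items expecting x next.
    return sum(v for (_, rhs, dot, v) in items
               if dot < len(rhs) and rhs[dot] == x)

def reduce_factor(item):
    # The reduce factor is the value of the completed item itself.
    _, rhs, dot, v = item
    assert dot == len(rhs)
    return v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Factors",
"sec_num": "5.5"
},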
{
"text": "The introduction of probability created a new cri terion for equality between two sets of items: not only must they contain the same items, they mu \ufffd t have the same item probability assignment. It 1s thus possible that we have many (possibly infi nite) sets of similar items of differing probability assignments. This is especially s \ufffd when there a \ufffd e loops amongst the sets of items (1e, the states) _ m the automaton created by the table construct10n algorithmthere is no guarantee that \ufffd he differ ing probability assignments of the recurrmg states would converge. Even if they do converge even \ufffd u ally, it is still undesirable to have a huge parsmg table of which many states have exactly the same underlying item set but differing probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deferred Probabilities",
"sec_num": "6"
},
{
"text": "To remedy this undesirable situation, we in troduce a mechanism called defe rred probability which will guarantee that the item sets converge without duplicating too many of the states. Thus far we have been precomputing item's stochas tic ' values in an eager fashion -propagating the probabilities as early as possible. Deferred _ proba bility provides a means to defer propagatmg cer tain problematic probability assignments ( Pr ? b lematic in the sense that it causes many s1m1lar states with differing probability assignments) un til appropriate. In the extreme case, probabilities are deferred until reduction time, ie, the stochas tic factors of REDUCE actions are the respec tive rule probabilities and all other parse actions have unit stochastic factors. A reasonable post ponement, however, would be to defer propagating the probabilities of the kernel items (kernel prob abilities) until the following state. By forcing the differing item sets to have some fixed predefined probability assignment (while deferring the pro \ufffd agation of the \"real\" probabiliti : s until \ufffd . pp \ufffd opri ate times), we can prevent excessive duplication of similar states with same items but different prob abilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deferred Probabilities",
"sec_num": "6"
},
{
"text": "To allow for deferred probabilities, we extend the original notion of probabilistic item to contain an additional field q which is the deferred proba bility for that item. That is, a probabilistic item would have the form (A -a \u2022 /3, p, q). The de fault value of q is 1, meaning that no probability has been deferred. If in the process of construct ing the closure states the table-construction pro gram discovers that it is re-creating many states with the same underlying items but with differing probabilities or when it detects a non-converging loop, it might decide to replace that state with one in which the original kernel probabilities are deferred. That is, if the item (A -a \u2022 /3, p, q) is a kernel item, and /3 =f. f , we replace it with a deferred item (A -a\u2022 {3, p', \ufffd ) and proceed to compute the closure of the kernel set as before (ie, ignoring the deferred probabilities). In essence we have reassigned a kernel probability of p' to the kernel items temporarily instead of its origi nal probability. It is important that this choice of assignment of p' be fixed with respect to that state. For instance, one assignment would be to impose a uniform probability distribution onto the deferred kernel items, that is, let p' be the prob ability Number of iern el items . Another choice is to assign unit.probability to each of the kernel items, which allows us to simulate the effect of treating each of the kernel items as if it forms a separate state.",
"cite_spans": [],
"ref_spans": [
{
"start": 680,
"end": 697,
"text": "(A -a \u2022 /3, p, q)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deferred Probabilities",
"sec_num": "6"
},
{
"text": "Although in theory it is possible to defer the kernel probabilities until reduction time, in prac tice it is sufficient to defer it for only one state transition. That is, we recover the deferred prob abilities in the next state. We can do this by enabling the propagation of the deferred proba bilities in the next state, simply by multiplying back the deferred probabilities q into the kernel probabilities of the next state. In other words, as in Section 5.1, if [Ai -ai \u2022 X/3i,Si,q] is in State m -1, then the corresponding kernel item in State m would be [Ai -aiX \u2022 /3i , \ufffd ' 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deferred Probabilities",
"sec_num": "6"
},
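{
"text": "A sketch (ours) of the extended item and the one-transition recovery step described above; the field names are illustrative:

from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    lhs: str
    rhs: tuple
    dot: int
    value: float            # stochastic value
    deferred: float = 1.0   # deferred probability q (1 = nothing deferred)

def goto_kernel(item):
    # Advance the dot and multiply the deferred probability back into the
    # kernel value of the next state, resetting q to 1.
    return Item(item.lhs, item.rhs, item.dot + 1,
                item.value * item.deferred, 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deferred Probabilities",
"sec_num": "6"
},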
{
"text": "In this paper, we have presented a method for deal ing with left recursions in constructing probabilis tic LR parsing tables for left recursive PCFGs. We have described runtime probabilistic LR parsers which use probabilistic parsing table. The table construction method, as outlined in this paper and more formally in the appendix, has been imple mented in Common Lisp. The two versions of run time parsers described in this paper have also been implemented in Common Lisp, and incorporated with various search strategies such as beam-search and best-first search ( only for the tree-stack ver-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Algorithm A full algorithm for probabilistic LR parsing table construction for general probabilistic context-free grammar is presented here. The deferred proba bility mechanism as described in Section 6 is em ployed, the chosen reassignment of kernel proba bility being the unit probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A. Table Construction",
"sec_num": null
},
{
"text": "A.1.1 CLOSURE CLOSURE takes a set of ordinary nonproba bilistic LR(0) items and returns the set of LR(0) items which is the closure of the input items. A standard algorithm for CLOSURE can be found in Aho and Ullman(1977) .",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "Aho and Ullman(1977)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Auxiliary Functions",
"sec_num": null
},
{
"text": "A set of k probabilistic items for someOutput: A set of probabilistic items which is the closure of the input probabilistic items. Each probabilistic item in the output set carries a stochastic value which is the sum of the posterior probabilities of that item given the input items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.2 PROB-CLOSURE Input:",
"sec_num": null
},
{
"text": "Step 1:Step 2: Suppose k' is the size of C. Let Ii be the i-th item [Ai -ai./3i] in C, 1 \ufffd i \ufffd k'. Also, for each item Ii, letSi be a variable denoting its stochastic value.1. For 1 \ufffd i \ufffd k, Si := P i; 2. Let &B be the set of items in C that are expecting B as the next symbol on the stack. That is, &B is the setwhere P r is the probability of the production Ai -/3i .Step 3: Solve the system of linear equations gen' erated byStep 2 , using any stan dard algorithm such as simple Gaussian Elimination (Strang 1980) . When k = 0, GOTO( {Ii }, X) is undefined.",
"cite_spans": [
{
"start": 503,
"end": 516,
"text": "(Strang 1980)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Let U be the canonical collection of sets of prob abilistic items for the grammar G' . U can be con structed as described below .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.4 Sets-of-Items Construction",
"sec_num": null
},
{
"text": "Repeat the process of applying the GOTO func tion (as defined in Step A. Note that equality between two sets of proba bilistic items here requires that they contain the same items with equal corresponding stochastic values, as well as deferred probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initially U := PROB-CLOSURE({[S' --S, 1]}).",
"sec_num": null
},
{
"text": "The algorithm is very similar to standard LR ta ble construction (Aho and Ullman 1977) except for the additional step to compute the stochastic factor for eac;h action (shift , reduce , or accept). G = (N, T, R, S ) , we de fine a corresponding grammar G' with a system generated start symbol S':",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Aho and Ullman 1977)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 198,
"end": 215,
"text": "G = (N, T, R, S )",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 LR Table Construction",
"sec_num": null
},
{
"text": "Input: U, the canonical collection of sets of prob abilistic items for grammar G'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Given a grammar",
"sec_num": null
},
{
"text": "Method: Let U = {'110, '111, ... , '11n}, where W' 'Wi, set ACTION[i, a] to ( \"reduce A --+ a\" , p) for every a E FOLLOW(A).3. If [S' --+ S\u2022, p] is in 'Wi , set ACTION[i, $] ($ is an end-of-input marker) to ( \"accept'' , p ) . Aho and Ullman(1977) .",
"cite_spans": [
{
"start": 8,
"end": 22,
"text": "Let U = {'110,",
"ref_id": null
},
{
"start": 23,
"end": 28,
"text": "'111,",
"ref_id": null
},
{
"start": 29,
"end": 34,
"text": "... ,",
"ref_id": null
},
{
"start": 35,
"end": 41,
"text": "'11n},",
"ref_id": null
},
{
"start": 42,
"end": 50,
"text": "where W'",
"ref_id": null
},
{
"start": 51,
"end": 72,
"text": "'Wi, set ACTION[i, a]",
"ref_id": null
},
{
"start": 127,
"end": 144,
"text": "If [S' --+ S\u2022, p]",
"ref_id": null
},
{
"start": 151,
"end": 173,
"text": "'Wi , set ACTION[i, $]",
"ref_id": null
},
{
"start": 207,
"end": 224,
"text": "( \"accept'' , p )",
"ref_id": null
},
{
"start": 227,
"end": 247,
"text": "Aho and Ullman(1977)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 76,
"end": 99,
"text": "( \"reduce A --+ a\" , p)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Output: If possible, a probabilistic LR parsing table consisting of a parsing action function ACTION and a goto function GOTO.",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "sion) for comparison. The programs run success fully on various small toy grammars, including the ones listed in this paper. In future, we hope to ex perime:qt with larger grammars such as the one in Fuj isaki",
"authors": [],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "sion) for comparison. The programs run success fully on various small toy grammars, including the ones listed in this paper. In future, we hope to ex perime:qt with larger grammars such as the one in Fuj isaki(1 984).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Principles of Compiler Design",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, A.V. and Ullman , J .D. 1977. Principles of Compiler Design. Addison Wesley.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Grammati cal Inference: Introduction and Survey -Part II",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Fu",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Booth",
"suffix": ""
}
],
"year": 1975,
"venue": "IEEE Tra ns on Sys., Man and Cy ber. SMC",
"volume": "5",
"issue": "",
"pages": "409--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, K. S. , and Booth, T. L. , 1975. Grammati cal Inference: Introduction and Survey -Part II. IEEE Tra ns on Sys., Man and Cy ber. SMC-5:409- 423.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Approach to Stochastic Parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fujisaki",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of COLING8,4",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fujisaki, T. 1984. An Approach to Stochastic Parsing. Proceedings of COLING8,4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Linear Algebra and Its Applica tions",
"authors": [
{
"first": "G",
"middle": [],
"last": "Strang",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strang, G. 1980. Linear Algebra and Its Applica tions, 2nd Ed. Academic Press, New York , NY.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Probabilistic Grammars for N a. t ural Languages",
"authors": [
{
"first": "P",
"middle": [],
"last": "Suppes",
"suffix": ""
}
],
"year": 1970,
"venue": "Synthese",
"volume": "22",
"issue": "",
"pages": "95--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suppes, P. 1970. Probabilistic Grammars for N a. t ural Languages. Synthese 22:95-116.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Effi cient Parsing fo r Natural Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M. 1985. Effi cient Parsing fo r Natural Language. Kluwer Academic Publishers, Boston , MA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An Effi cient Augmented Context-Free Parsing Algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "1-2",
"pages": "31--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M. January-June, 1987. An Effi cient Augmented Context-Free Parsing Algorithm. Computational Linguistics 13(1-2):31-46.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Linear Programming",
"authors": [
{
"first": "Vasek",
"middle": [],
"last": "Chvatal",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasek Chvatal, 1983. Linear Programming, Chap ter 6.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Probabilistic Languages: A Review and Some Open Questions",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Wetherall",
"suffix": ""
}
],
"year": 1980,
"venue": "Computing Surveys",
"volume": "12",
"issue": "",
"pages": "36--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wetherall, C. S. 1980. Probabilistic Languages: A Review and Some Open Questions. Computing Surveys 12:36 1-379.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probabilistic LR Parsing for Speech Recognition. International Parsing Workshop '89",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
},
{
"first": "E",
"middle": [
"N"
],
"last": "Wrigley",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wright, J .H. and Wrigley, E.N. 1989. Probabilistic LR Parsing for Speech Recognition. International Parsing Workshop '89, Carnegie Mellon Univer sity, Pittsburgh PA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "GRA2: A Left-recursive PCFG",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Probabilistic Parsing",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Figure 7: Merging",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Probabilistic Parsing",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Figure 8: Splitting",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "if starting from item (A -+ a: \u2022 B/3, p) we derive the item ( C -+ \u2022 B,, p x p B), then by left recursion we must also have the items (C -+ \u2022B,, p x P k ) for i = 1, ... oo. The probabilistic item (C -+ \u2022B,, q) , being a coalescence of these items, would have item probability q = I::\ufffd 1 p x p\ufffd = \ufffd'",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "--NP PP , Sa ] Suppose the kernel set contains only 1 0 , with So = \u00a5. Let V be a partial derivation before seeing the input symbol v. At this point, the possible derivations which ,vill lead to item Ji are: 1 'D\ufffd VP --.v-N P\ufffdNP-+\u2022n \u2022 v \ufffd VP ..:..... v-NP .Jb_ NP -+ -NP VP \ufffd NP -+ \u2022n VP --. v \u2022 NP \u00be NP -+ -NP VP \u00be ... :\u00bc-NP --n The sum of the posterior probabilities of the above possible partial derivations are:",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "Figure 10: Start State of GRA3 lo : lS' -\u2022S, 1] 11 : [S --Sa1 , Si ] 12 : [S --Ba2, S2 ] /J: [S \ufffd -Caa,\u2022 . Sa] ]4 : [B \ufffd \u2022Saa, S4] ]5 : [B --Ba2, Ss] 16 : [B --Cai, S6] h : [C --Sa2 , S 1]",
"num": null,
"type_str": "figure"
},
"FIGREF9": {
"uris": null,
"text": "Figure 11: A Dependency Graph [VP --+ v-NP,So ] 2 To 5 [NP --n, S1 ][NP --det n,S,] [NP",
"num": null,
"type_str": "figure"
},
"FIGREF10": {
"uris": null,
"text": "is a linear equation of at most (k -m) unknowns, namely S m + 1 ,",
"num": null,
"type_str": "figure"
},
"FIGREF11": {
"uris": null,
"text": ". d S3 + s6 + S9 ) S2 = I (So + S1 + S4 + S1 ) Ss = ft(S3 + S6 + S9 ) S3 = I (So + S1 + S4 + S1 ) S9 = tf (S3 +S e+ S9 ) 84 = f (S2 + 85 + Ss ) S1 0 = 3 (83 + S6 + S9 ) S5 = 6 (82 + S5 + Ss ) S11= ft(S3 + Se+ S9 )",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"content": "<table><tr><td>State</td><td>det</td><td>n</td><td>ACTION</td><td>G RA l GOTO</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td/><td>\u2022GRA3</td><td/></tr><tr><td>State</td><td>ACTION</td><td>GOTO</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
}
}
}
}