|
{ |
|
"paper_id": "E14-1036", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:40:50.020750Z" |
|
}, |
|
"title": "Deterministic Parsing using PCFGs", |
|
"authors": [ |
|
{ |
|
"first": "Mark-Jan", |
|
"middle": [], |
|
"last": "Nederhof", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of St Andrews", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "McCaffery",
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of St Andrews", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose the design of deterministic constituent parsers that choose parser actions according to the probabilities of parses of a given probabilistic context-free grammar. Several variants are presented. One of these deterministically constructs a parse structure while postponing commitment to labels. We investigate theoretical time complexities and report experiments.", |
|
"pdf_parse": { |
|
"paper_id": "E14-1036", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose the design of deterministic constituent parsers that choose parser actions according to the probabilities of parses of a given probabilistic context-free grammar. Several variants are presented. One of these deterministically constructs a parse structure while postponing commitment to labels. We investigate theoretical time complexities and report experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Transition-based dependency parsing (Yamada and Matsumoto, 2003; Nivre, 2008) has attracted considerable attention, not only due to its high accuracy but also due to its small running time. The latter is often realized through determinism, i.e. for each configuration a unique next action is chosen. The action may be a shift of the next word onto the stack, or it may be the addition of a dependency link between words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 64, |
|
"text": "(Yamada and Matsumoto, 2003;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 77, |
|
"text": "Nivre, 2008)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Because of the determinism, the running time is often linear or close to linear; most of the time and space resources are spent on deciding the next parser action. Generalizations that allow nondeterminism, while maintaining polynomial running time, were proposed by (Huang and Sagae, 2010; Kuhlmann et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 290, |
|
"text": "(Huang and Sagae, 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 313, |
|
"text": "Kuhlmann et al., 2011)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work has influenced, and has been influenced by, similar developments in constituent parsing. The challenge here is to deterministically choose a shift or reduce action. As in the case of dependency parsing, solutions to this problem are often expressed in terms of classifiers of some kind. Common approaches involve maximum entropy (Ratnaparkhi, 1997; Tsuruoka and Tsujii, 2005) , decision trees (Wong and Wu, 1999; Kalt, 2004) , and support vector machines (Sagae and Lavie, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 358, |
|
"text": "(Ratnaparkhi, 1997;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 385, |
|
"text": "Tsuruoka and Tsujii, 2005)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 422, |
|
"text": "(Wong and Wu, 1999;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 434, |
|
"text": "Kalt, 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 488, |
|
"text": "(Sagae and Lavie, 2005)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The programming-languages community recognized early on that large classes of grammars allow deterministic, i.e. linear-time, parsing, provided parsing decisions are postponed as long as possible. This has led to (deterministic) LR(k) parsing (Knuth, 1965; Sippu and Soisalon-Soininen, 1990) , which is a form of shift-reduce parsing. Here the parser needs to commit to a grammar rule only after all input covered by the right-hand side of that rule has been processed, while it may consult the next k symbols (the lookahead). LR is the optimal, i.e. most deterministic, parsing strategy that has this property. Deterministic LR parsing has also been considered relevant to psycholinguistics (Shieber, 1983) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 256, |
|
"text": "(Knuth, 1965;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 291, |
|
"text": "Sippu and Soisalon-Soininen, 1990)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 707, |
|
"text": "(Shieber, 1983)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Nondeterministic variants of LR(k) parsing, for use in natural language processing, have been proposed as well, some using tabulation to ensure polynomial running time in the length of the input string (Tomita, 1988; Billot and Lang, 1989) . However, nondeterministic LR(k) parsing is potentially as expensive as, and possibly more expensive than, traditional tabular parsing algorithms such as CKY parsing (Younger, 1967; Aho and Ullman, 1972) , as shown by for example (Shann, 1991) ; greater values of k make matters worse (Lankhorst, 1991) . For this reason, LR parsing is sometimes enhanced by attaching probabilities to transitions (Briscoe and Carroll, 1993) , which allows pruning of the search space (Lavie and Tomita, 1993) . This by itself is not uncontroversial, for several reasons. First, the space of probability distributions expressible by a LR automaton is incomparable to that expressible by a CFG (Nederhof and Satta, 2004) . Second, because an LR automaton may have many more transitions than rules, more training data may be needed to accurately estimate all parameters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 216, |
|
"text": "(Tomita, 1988;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 239, |
|
"text": "Billot and Lang, 1989)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 422, |
|
"text": "(Younger, 1967;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 444, |
|
"text": "Aho and Ullman, 1972)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 484, |
|
"text": "(Shann, 1991)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 543, |
|
"text": "(Lankhorst, 1991)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 665, |
|
"text": "(Briscoe and Carroll, 1993)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 733, |
|
"text": "(Lavie and Tomita, 1993)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 943, |
|
"text": "(Nederhof and Satta, 2004)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The approach we propose here retains some important properties of the above work on LR parsing. First, parser actions are delayed as long as possible, under the constraint that a rule is committed to no later than when the input covered by its right-hand side has been processed. Second, the parser action that is performed at each step is the most likely one, given the left context, the lookahead, and a probability distribution over parses given by a PCFG.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are two differences with traditional LR parsing however. First, there is no explicit representation of LR states, and second, probabilities of actions are computed dynamically from a PCFG rather than retrieved as part of static transitions. In particular, this is unlike some other early approaches to probabilistic LR parsing such as (Ng and Tomita, 1991) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 362, |
|
"text": "(Ng and Tomita, 1991)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The mathematical framework is reminiscent of that used to compute prefix probabilities (Jelinek and Lafferty, 1991; Stolcke, 1995) . One major difference is that instead of a prefix string, we now have a stack, which does not need to be parsed. In the first instance, this seems to make our problem easier. For our purposes however, we need to add new mechanisms in order to take lookahead into consideration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 115, |
|
"text": "(Jelinek and Lafferty, 1991;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 130, |
|
"text": "Stolcke, 1995)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is known, e.g. from (Cer et al., 2010; Candito et al., 2010) , that constituent parsing can be used effectively to achieve dependency parsing. It is therefore to be expected that our algorithms can be used for dependency parsing as well. The parsing steps of shift-reduce parsing with a binary grammar are in fact very close to those of many dependency parsing models. The major difference is, again, that instead of general-purpose classifiers to determine the next step, we would rely directly on a PCFG.", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 41, |
|
"text": "(Cer et al., 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 42, |
|
"end": 63, |
|
"text": "Candito et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The emphasis of this paper is on deriving the necessary equations to build several variants of deterministic shift-reduce parsers, all guided by a PCFG. We also offer experimental results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we summarize the theory of LR parsing. As usual, a context-free grammar (CFG) is represented by a 4-tuple (\u03a3, N, S, P ), where \u03a3 and N are two disjoint finite sets of terminals and nonterminals, respectively, S \u2208 N is the start symbol, and P is a finite set of rules, each of the form A \u2192 \u03b1, where A \u2208 N and \u03b1 \u2208 (\u03a3 \u222a N ) * . By grammar symbol we mean a terminal or nonterminal. We use symbols A, B, C, . . . for nonterminals, a, b, c, . . . for terminals, v, w, x, . . . for strings of terminals, X for grammar symbols, and \u03b1, \u03b2, \u03b3, . . . for strings of grammar symbols. For technical reasons, a CFG is often augmented by an additional rule S \u2020 \u2192 S$, where S \u2020 / \u2208 N and $ / \u2208 \u03a3. The symbol $ acts as an end-of-sentence marker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As usual, we have a (right-most) 'derives' relation \u21d2 rm , \u21d2 * rm denotes derivation in zero or more steps, and \u21d2 + rm denotes derivation in one or more steps. If d is a string of rules", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03c0 1 \u2022 \u2022 \u2022 \u03c0 k , then \u03b1 d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u21d2 rm \u03b2 means that \u03b2 can be derived from \u03b1 by applying this list of rules in right-most order. A string \u03b1 such that S \u21d2 * rm \u03b1 is called a rightsentential form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The last rule A \u2192 \u03b2 used in a derivation S \u21d2 + rm \u03b1 together with the position of (the relevant occurrence of) \u03b2 in \u03b1 we call the handle of the derivation. In more detail, such a derivation can be written as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S = A 0 \u21d2 rm \u03b1 1 A 1 \u03b2 1 \u21d2 * rm \u03b1 1 A 1 v 1 \u21d2 rm \u03b1 1 \u03b1 2 A 2 \u03b2 2 v 2 \u21d2 * rm . . . \u21d2 * rm \u03b1 1 \u2022 \u2022 \u2022 \u03b1 k\u22121 A k\u22121 v k\u22121 \u2022 \u2022 \u2022 v 1 \u21d2 rm \u03b1 1 \u2022 \u2022 \u2022 \u03b1 k\u22121 \u03b2v k\u22121 \u2022 \u2022 \u2022 v 1 , where k \u2265 1, and A i\u22121 \u2192 \u03b1 i A i \u03b2 i (1 \u2264 i < k) and A k\u22121 \u2192 \u03b2 are in P .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The underlined symbols are those that are (recursively) rewritten to terminal strings within the following relation \u21d2 rm or \u21d2 * rm . The handle here is A k\u22121 \u2192 \u03b2, together with the position of \u03b2 in the right-sentential form, just after", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03b1 1 \u2022 \u2022 \u2022 \u03b1 k\u22121 . A prefix of \u03b1 1 \u2022 \u2022 \u2022 \u03b1 k\u22121 \u03b2 is called a viable prefix in the derivation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given an input string w, a shift-reduce parser finds a right-most derivation of w, but in reverse order, identifying the last rules first. It manipulates configurations of the form (\u03b1, v$), where \u03b1 is a viable prefix (in at least one derivation) and v is a suffix of w. The initial configuration is (\u03b5, w$), where \u03b5 is the empty string. The two allowable steps are (\u03b1, av$) (\u03b1a, v$), which is called a shift, and (\u03b1\u03b2, v$) (\u03b1A, v$) where A \u2192 \u03b2 is in P , which is called a reduce. Acceptance happens upon reaching a configuration (S, $).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
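The two moves above can be made concrete in a few lines of Python. This is only an illustrative sketch: the toy grammar and the hand-supplied action sequence are hypothetical, since the paper's whole point is how to choose the actions automatically.

```python
# Sketch of shift-reduce configurations (stack, remaining input).
# Toy grammar (hypothetical): S -> A B, A -> a, B -> b.

def step(config, action):
    """Apply one shift or reduce step to a configuration."""
    stack, inp = config
    if action == "shift":
        return (stack + [inp[0]], inp[1:])        # (alpha, a v$) |- (alpha a, v$)
    lhs, rhs = action                             # reduce with rule lhs -> rhs
    assert stack[-len(rhs):] == rhs, "handle must be on top of the stack"
    return (stack[:-len(rhs)] + [lhs], inp)       # (alpha beta, v$) |- (alpha A, v$)

config = ([], ["a", "b", "$"])                    # initial configuration (eps, w$)
oracle = ["shift", ("A", ["a"]), "shift", ("B", ["b"]), ("S", ["A", "B"])]
for act in oracle:
    config = step(config, act)

print(config)                                     # (['S'], ['$']) -- acceptance
```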
|
{ |
|
"text": "A 1-item has the form [A \u2192 \u03b1 \u2022 \u03b2, a], where A \u2192 \u03b1\u03b2 is a rule. The bullet separates the righthand side into two parts, the first of which has been matched to processed input. The symbol a \u2208 \u03a3 \u222a {$} is called the follower.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to decide whether to apply a shift or reduce after reaching a configuration", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(X 1 \u2022 \u2022 \u2022 X k , w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ", one may construct the sets I 0 , . . . , I k , inductively defined as follows, with 0 \u2264 i \u2264 k:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 if S \u2192 \u03c3 in P , then [S \u2192 \u2022 \u03c3, $] \u2208 I 0 , \u2022 if [A \u2192 \u03b1 \u2022 B\u03b2, a] \u2208 I i , B \u2192 \u03b3 in P , and \u03b2 \u21d2 * rm x, then [B \u2192 \u2022 \u03b3, b] \u2208 I i , where b = 1 : xa, \u2022 if [A \u2192 \u03b1 \u2022 X i \u03b2, a] \u2208 I i\u22121 then [A \u2192 \u03b1X i \u2022 \u03b2, a] \u2208 I i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
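A minimal sketch of this closure computation in Python, under the normal form assumed later in the section (no nonterminal derives the empty string), so that the new follower b is either a first terminal of the remainder or, when the remainder is empty, the old follower a. The grammar is a hypothetical toy, not from the paper.

```python
# Closure of a set of LR(1) items, following the second clause in the text.
# Items are tuples (lhs, rhs, dot, follower). Toy grammar, no empty rules.

RULES = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))]
NONTERMS = {lhs for lhs, _ in RULES}

def first(sym):
    """Terminals that can begin a string derived from sym (no empty rules)."""
    if sym not in NONTERMS:
        return {sym}
    out = set()
    for lhs, rhs in RULES:
        if lhs == sym and rhs[0] != sym:          # skip direct left recursion
            out |= first(rhs[0])
    return out

def closure(items):
    items = set(items)
    queue = list(items)
    while queue:
        lhs, rhs, dot, la = queue.pop()
        if dot < len(rhs) and rhs[dot] in NONTERMS:
            beta = rhs[dot + 1:]
            # b = 1 : xa, where beta =>* x: first terminal of beta, else la
            followers = first(beta[0]) if beta else {la}
            for l2, r2 in RULES:
                if l2 == rhs[dot]:
                    for b in followers:
                        item = (l2, r2, 0, b)
                        if item not in items:
                            items.add(item)
                            queue.append(item)
    return items

I0 = closure({("S", ("A", "B"), 0, "$")})
print(sorted(I0))
```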
|
{ |
|
"text": "(The expression 1 : y denotes a if y = az, for some a and z; we leave it undefined for y = \u03b5.) Exhaustive application of the second clause above will be referred to as the closure of a set of items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It is not difficult to show that if [A \u2192 \u03b1 \u2022, a] \u2208 I k , then \u03b1 is of the form X j+1 \u2022 \u2022 \u2022 X k , some j, and A \u2192 \u03b1 at position j + 1 is the handle of at least one derivation S \u21d2 * rm X 1 \u2022 \u2022 \u2022 X k ax, some x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "If furthermore a = 1 : w, where 1 : w is called the lookahead of the current configuration", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(X 1 \u2022 \u2022 \u2022 X k , w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ", then this justifies a reduce with A \u2192 \u03b1, as a step that potentially leads to a complete derivation; this is only 'potentially' because the actual remaining input w may be unlike ax, apart from the matching one-symbol lookahead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similarly", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": ", if [A \u2192 \u03b1 \u2022 a\u03b2, b] \u2208 I k , then \u03b1 = X j+1 \u2022 \u2022 \u2022 X k ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "some j, and if furthermore a = 1 : w, then a shift of symbol a is a justifiable step. Potentially, if a is followed by some x such that \u03b2 \u21d2 * rm x, then we may eventually obtain a stack X 1 \u2022 \u2022 \u2022 X j \u03b1a\u03b2, which is a prefix of a rightsentential form, with the handle being A \u2192 \u03b1a\u03b2 at position j + 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For a fixed grammar, the collection of all possible sets of 1-items that may arise in processing any viable prefix is a finite set. The technique of LR(1) parsing relies on a precomputation of all such sets of items, each of which is turned into a state of the LR(1) automaton. The initial state con-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "sists of closure({[S \u2192 \u2022 \u03c3, $] | S \u2192 \u03c3 \u2208 P }).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The automaton has a transition labeled X from", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "I to J if goto(I, X) = J, where goto(I, X) = closure({[A \u2192 \u03b1X \u2022 \u03b2, a] | [A \u2192 \u03b1 \u2022 X\u03b2, a] \u2208 I}).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the present study, we do not precompute all possible states of the LR(1) automaton, as this would require prohibitive amounts of time and memory. Instead, our parsers are best understood as computing LR states dynamically, while furthermore attaching probabilities to individual items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the sequel we will assume that all rules either have the (lexical) form A \u2192 a, the (binary) form A \u2192 BC, or the (unary) form A \u2192 B. This means that A \u21d2 * rm \u03b5 is not possible for any A. The end-of-sentence marker is now introduced by two augmented rules S \u2020 \u2192 SS $ and S $ \u2192 $.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A probabilistic CFG (PCFG) is a 5-tuple (\u03a3, N, S, P, p), where the extra element p maps rules to probabilities. The probability of a derivation \u03b1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic shift-reduce parsing",

"sec_num": "3"
|
}, |
|
{ |
|
"text": "d \u21d2 rm \u03b2, with d = \u03c0 1 \u2022 \u2022 \u2022 \u03c0 k , is defined to be p(d) = \u220f i p(\u03c0 i ).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
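As a quick illustration (with a made-up toy PCFG, not one from the paper), the probability of a derivation is just the product of its rule probabilities, and properness can be checked per left-hand side:

```python
from math import isclose, prod

# Hypothetical toy PCFG: rule -> probability.
p = {("S", ("A", "B")): 1.0,
     ("A", ("a",)): 1.0,
     ("B", ("b",)): 0.4,
     ("B", ("c",)): 0.6}

# Properness: for every nonterminal, the probabilities of its rules sum to 1.
for nt in {lhs for lhs, _ in p}:
    assert isclose(sum(q for (lhs, _), q in p.items() if lhs == nt), 1.0)

# p(d) for the right-most derivation S => A B => A b => a b.
d = [("S", ("A", "B")), ("B", ("b",)), ("A", ("a",))]
print(prod(p[r] for r in d))                       # 0.4
```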
|
{ |
|
"text": "The probability p(w) of a string w is defined to be the sum of p(d) for all d with", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S d \u21d2 rm w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We assume properness, i.e. \u2211 \u03c0=A\u2192\u03b1 p(\u03c0) = 1 for all A, and consistency, i.e. \u2211 w p(w) = 1. Properness and consistency together imply that for each nonterminal A, the sum of p(d) for all d with",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2203 w A d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u21d2 rm w equals 1. We will further assume an augmented PCFG with extra rules S \u2020 \u2192 SS $ and S $ \u2192 $ both having probability 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Consider a viable prefix A 1 \u2022 \u2022 \u2022 A k on the stack of a shift-reduce parser, and lookahead a. Each right-most derivation in which the handle is A \u2192 A k\u22121 A k at position k \u2212 1 must be of the form sketched in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 217, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Because of properness and consistency, we may assume that all possible subderivations generating strings entirely to the right of the lookahead have probabilities summing to 1. To compactly express the remaining probabilities, we need additional notation. First we define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "V(C, D) = d : \u2203 w C d \u21d2 rm Dw p(d)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "for any pair of nonterminals C and D. This will be used later to 'factor out' a common term in a (potentially infinite) sum of probabilities of subderivations; the w in the expression above corresponds to a substring of the unknown input beyond the lookahead. In order to compute such values, we fix an ordering of the nonterminals by N = {C 1 , . . . , C r }, with r = |N |. We then construct a matrix M , such that M i,j = \u2211 \u03c0=C i \u2192C j \u03b1 p(\u03c0). In words, we sum the probabilities of all rules that have left-hand side C i and a right-hand side beginning with C j .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A downward path in a parse tree from an occurrence of C to an occurrence of D, restricted to following always the first child, can be of any length n, including n = 0 if C = D. This means we need to obtain the matrix M * = \u2211 0\u2264n M n , and",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "V(C i , C j ) = M * i,j for all i and j. Fortunately, M * i,j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "can be effectively computed as (I \u2212 M ) \u22121 , where I is the identity matrix of size r and the superscript denotes matrix inversion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
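This computation of V is easy to reproduce. The PCFG below is a hypothetical example with one left-recursive rule, so the left-spine sum is a genuine infinite series; in practice one inverts (I \u2212 M) as the text says, while this stdlib-only sketch simply truncates the series \u2211 M^n, which converges for a consistent PCFG.

```python
# Toy PCFG (hypothetical): S -> A B (1.0), A -> A a (0.3) | a (0.7), B -> b (1.0).
nonterms = ["S", "A", "B"]
idx = {n: i for i, n in enumerate(nonterms)}
rules = [("S", ("A", "B"), 1.0), ("A", ("A", "a"), 0.3),
         ("A", ("a",), 0.7), ("B", ("b",), 1.0)]

r = len(nonterms)
# M[i][j] sums p over rules C_i -> C_j alpha (right-hand side starts with C_j).
M = [[0.0] * r for _ in range(r)]
for lhs, rhs, prob in rules:
    if rhs[0] in idx:
        M[idx[lhs]][idx[rhs[0]]] += prob

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(r)) for j in range(r)]
            for i in range(r)]

# M* = sum over n >= 0 of M^n, truncated; exact form is (I - M)^{-1}.
V = [[float(i == j) for j in range(r)] for i in range(r)]   # M^0 = I
P = [[float(i == j) for j in range(r)] for i in range(r)]
for _ in range(200):
    P = matmul(P, M)
    V = [[V[i][j] + P[i][j] for j in range(r)] for i in range(r)]

# V(A, A) = 1 + 0.3 + 0.3^2 + ... = 1 / 0.7
print(V[idx["A"]][idx["A"]])
```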
|
{ |
|
"text": "We further define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "U(C, D) = d : C d \u21d2 rm D p(d)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "much as above, but restricting attention to unit rules. The expected number of times a handle A \u2192 A k\u22121 A k at position k \u2212 1 occurs in a right-most derivation with viable prefix A 1 \u2022 \u2022 \u2022 A k and lookahead a is now given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E(A 1 \u2022 \u2022 \u2022 A k , a, A \u2192 A k\u22121 A k ) = \u2211 S \u2020 = E 0 , . . . , E k\u22122 , F 1 , . . . , F k\u22121 = A, F, E, B, B\u2032 , m : 0 \u2264 m < k \u2212 1 \u220f i: 1\u2264i\u2264m V(E i\u22121 , F i ) \u2022 p(F i \u2192 A i E i ) \u2022 V(E m , F ) \u2022 p(F \u2192 EB) \u2022 U(E, F m+1 ) \u2022 \u220f i: m<i<k\u22121 p(F i \u2192 A i E i ) \u2022 U(E i , F i+1 ) \u2022 p(F k\u22121 \u2192 A k\u22121 A k ) \u2022 V(B, B\u2032 ) \u2022 p(B\u2032 \u2192 a)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Note that the value above is not a probability and may exceed 1. This is because the same viable prefix may occur several times in a single rightmost derivation. At first sight, the computation of E seems to require an exponential number of steps in k. However, we can use an idea similar to that commonly used for computation of forward probabilities for HMMs (Rabiner, 1989) . We first define F:", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 376, |
|
"text": "(Rabiner, 1989)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "F(\u03b5, E) = 1 if E = S \u2020 , 0 otherwise F(\u03b1A, E) = \u2211 E\u2032 ,\u03c0=F \u2192AE\u2032 F(\u03b1, E\u2032 ) \u2022 V(E\u2032 , F ) \u2022 p(\u03c0)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
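A direct transcription of this recursion is straightforward. The tables p (rule probabilities) and V (the left-spine sums computed from M* earlier in the section) below are tiny hypothetical stand-ins, not values from the paper; the point is only the shape of the recursion.

```python
# Direct transcription of the recursion for F.

S_DAGGER = "S+"                                   # stands in for S-dagger
NONTERMS = {"S+", "S", "S$"}
p = {("S+", ("S", "S$")): 1.0}                    # augmented rule, prob 1
V = {("S+", "S+"): 1.0, ("S+", "S"): 1.0,
     ("S", "S"): 1.0, ("S$", "S$"): 1.0}          # V(C, C) is at least 1

def F(alpha, E):
    """F(alpha, E) from the text, with alpha a tuple of stack symbols."""
    if not alpha:                                 # F(eps, E)
        return 1.0 if E == S_DAGGER else 0.0
    head, A = alpha[:-1], alpha[-1]               # alpha = head A
    total = 0.0
    for (lhs, rhs), prob in p.items():            # binary rules F' -> A E
        if len(rhs) == 2 and rhs[0] == A and rhs[1] == E:
            # sum over E1 of F(head, E1) * V(E1, lhs) * p(rule)
            total += sum(F(head, E1) * V.get((E1, lhs), 0.0) * prob
                         for E1 in NONTERMS)
    return total

# With stack (S), the only derivation uses E0 = S-dagger and S+ -> S S$:
print(F(("S",), "S$"))                            # 1.0
```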
|
{ |
|
"text": "This corresponds to the part of the definition of E involving A 1 , . . . , A m , E 0 , . . . , E m and F 1 , . . . , F m . We build on this by defining:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "G(\u03b1, E, B) = \u2211 E\u2032 ,\u03c0=F \u2192EB F(\u03b1, E\u2032 ) \u2022 V(E\u2032 , F ) \u2022 p(\u03c0)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One more recursive function is needed for what was A m+1 , . . . , A k\u22122 , E m+1 , . . . , E k\u22122 and F m+1 , . . . , F k\u22122 in the earlier definition of E:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "H(\u03b5, E, B) = G(\u03b5, E, B) H(\u03b1A, E, B) = \u2211 E\u2032 ,\u03c0=F \u2192AE\u2032 H(\u03b1, E\u2032 , B) \u2022 U(E\u2032 , F ) \u2022 p(\u03c0) + G(\u03b1A, E, B) Figure 1: Right-most derivation leading to F k\u22121 \u2192 A k\u22121 A k in viable prefix A 1 \u2022 \u2022 \u2022 A k with lookahead a.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, we can express E in terms of these recursive functions, considering the more general case of any rule \u03c0 = F \u2192 \u03b2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E(\u03b1\u03b2, a, F \u2192 \u03b2) = \u2211 E,B H(\u03b1, E, B) \u2022 U(E, F ) \u2022 p(\u03c0) \u2022 L(B, a) E(\u03b1, a, F \u2192 \u03b2) = 0 if \u00ac\u2203 \u03b3 \u03b1 = \u03b3\u03b2",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "L(B, a) = \u2211 \u03c0=B\u2032 \u2192a V(B, B\u2032 ) \u2022 p(\u03c0)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The expected number of times the handle is to be found to the right of \u03b1, with the stack being \u03b1 and the lookahead symbol being a, is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E(\u03b1, a, shift) = B F(\u03b1, B) \u2022 L(B, a)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The expected number of times we see a stack \u03b1 with lookahead a is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E(\u03b1, a) = E(\u03b1, a, shift) + \u03c0 E(\u03b1, a, \u03c0)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The probability that a reduce with rule \u03c0 is the correct action when the stack is \u03b1 and the lookahead is a is naturally E(\u03b1, a, \u03c0)/E(\u03b1, a) and the probability that a shift is the correct action is E(\u03b1, a, shift)/E(\u03b1, a). For determining the most likely action we do not need to compute E(\u03b1, a); it suffices to identify the maximum value among E(\u03b1, a, shift) and E(\u03b1, a, \u03c0) for each rule \u03c0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
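The selection of the most likely action can be sketched as follows; `e_shift` and `e_reduce` stand for procedures computing E(α, a, shift) and E(α, a, π), and all names here are illustrative rather than taken from the authors' implementation.

```python
# Sketch of deterministic action selection: pick the action with the
# largest expectation value; E(alpha, a) itself is never computed.
# e_shift/e_reduce are assumed to implement the formulas in the text.

def best_action(alpha, a, rules, e_shift, e_reduce):
    best, score = "shift", e_shift(alpha, a)
    for rule in rules:
        s = e_reduce(alpha, a, rule)
        if s > score:
            best, score = ("reduce", rule), s
    return best
```

Since only the maximum matters, ties and normalization by E(α, a) can be ignored entirely.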
|
{ |
|
"text": "A deterministic shift-reduce parser can now be constructed that always chooses the most likely next action. For a given input string, the number of actions performed by this parser is linear in the input length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A call of E may lead to a number of recursive calls of F and H that is linear in the stack size and thereby in the input length. Note however that by remembering the values returned by these function between parser actions, one can ensure that each additional element pushed on the stack requires a bounded number of additional calls of the auxiliary functions. Because only linearly many elements are pushed on the stack, the time complexity becomes linear in the input length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
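The caching idea can be illustrated with a stack that keeps, for each prefix, the value of an F-like function: a push performs one step of the recurrence and a pop merely discards a cache entry. This is a minimal sketch with an invented `step` recurrence, not the paper's code.

```python
# Illustrative sketch of caching prefix values alongside the stack so
# that a push extends the cache and a pop discards one entry, keeping
# the extra work per parser action bounded.

class CachedStack:
    def __init__(self, init_value):
        self.symbols = []            # stack symbols
        self.values = [init_value]   # values[i] = value for the prefix of length i

    def push(self, symbol, step):
        # step(prev_value, symbol) computes the value of the longer
        # prefix from the cached value of the shorter one.
        self.values.append(step(self.values[-1], symbol))
        self.symbols.append(symbol)

    def pop(self):
        self.values.pop()
        return self.symbols.pop()

    def top_value(self):
        return self.values[-1]
```

With F as in the text, `step` would combine the cached vector F(α, ·) with the rule probabilities associated with the pushed symbol.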
|
{ |
|
"text": "Complexity analysis seems less favorable if we consider the number of nonterminals. The definitions of G and H each involve four nonterminals excluding the stack symbol A, so that the Therefore we have implemented an alternative that has a time complexity that is only quadratic in the size of the grammar, at the expense of a quadratic complexity in the length of the input string, as detailed in Appendix A. This is still better in practice if the number of nonterminals is much greater than the length of the input string, as in the case of the grammars we investigated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-reduce parsing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We have assumed so far that a deterministic shiftreduce parser chooses a unique next action in each configuration, an action being a shift or reduce. Implicit in this was that if the next action is a reduce, then also a unique rule is chosen. However, if we assume for now that all non-lexical rules are binary, then we can easily generalize the pars-ing algorithm to consider all possible rules whose right-hand sides match the top-most two stack elements, and postpone commitment to any of the nonterminals in the left-hand sides. This requires that stack elements now contain sets of grammar symbols. Each of these is associated with the probability of the most likely subderivation consistent with the relevant substring of the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Each reduce with a binary rule is implicitly followed by zero or more reduces with unary rules. Similarly, each shift is implicitly followed by a reduce with a lexical rule and zero or more reduces with unary rules; see also (Graham et al., 1980) . This uses a precompiled table similar to U, but using maximization in place of summation, defined by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 246, |
|
"text": "(Graham et al., 1980)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "U max (C, D) = max d : C d \u21d2 rm D p(d)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
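Because U_max ranges over chains of unary rules, it can be precompiled by a max-product closure in the style of Floyd-Warshall. The sketch below uses an illustrative rule encoding `{(C, D): p}` of our own choosing.

```python
# Precompute U_max(C, D): the probability of the best chain of unary
# rules rewriting C into D, via a max-product Floyd-Warshall closure.
# With probabilities <= 1, the best chain never repeats a nonterminal,
# so a single closure pass suffices.

def u_max(nonterminals, unary_rules):
    U = {(c, d): 0.0 for c in nonterminals for d in nonterminals}
    for c in nonterminals:
        U[c, c] = 1.0                      # empty chain: C derives itself
    for (c, d), p in unary_rules.items():
        U[c, d] = max(U[c, d], p)
    for k in nonterminals:                  # allow chains passing through k
        for c in nonterminals:
            for d in nonterminals:
                if U[c, k] * U[k, d] > U[c, d]:
                    U[c, d] = U[c, k] * U[k, d]
    return U
```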
|
{ |
|
"text": "More concretely, configurations have the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(Z 1 . . . Z k , v$), k \u2265 0, where each Z i (1 \u2264 i \u2264 k)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "is a set of pairs (A, p), where A is a nonterminal and p is a (non-zero) probability; each A occurs at most once in $) , where Z consists of all pairs (E, p) such that:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 118, |
|
"text": "$)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Z i . A shift turns (\u03b1, av$) into (\u03b1Z, v$), where Z consists of all pairs (E, p) such that p = max F U max (E, F ) \u2022 p(F \u2192 a). A gen- eralized binary reduce now turns (\u03b1Z 1 Z 2 , v$) into (\u03b1Z, v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "p = max \u03c0 = F \u2192 A 1 A 2 , (A 1 , p 1 ) \u2208 Z 1 , (A 2 , p 2 ) \u2208 Z 2 U max (E, F ) \u2022 p(\u03c0) \u2022 p 1 \u2022 p 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
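The generalized shift and reduce above can be sketched directly; the dictionary encodings for lexical rules, binary rules, and the table U_max are our own illustrative choices, and each stack cell Z is a dict mapping nonterminals to their best probability.

```python
# Sketch of structurally deterministic shift and reduce on stack cells
# that hold sets of (nonterminal, probability) pairs, following the
# definitions in the text.  u_max is a dict over nonterminal pairs.

def gen_shift(a, lexical_rules, u_max, nonterminals):
    # Z = all (E, p) with p = max_F u_max[E, F] * p(F -> a)
    Z = {}
    for E in nonterminals:
        p = max((u_max[E, F] * q for (F, w), q in lexical_rules.items()
                 if w == a and u_max[E, F] > 0.0), default=0.0)
        if p > 0.0:
            Z[E] = p
    return Z

def gen_reduce(Z1, Z2, binary_rules, u_max, nonterminals):
    # Z = all (E, p) with p maximizing u_max[E, F] * p(F -> A1 A2) * p1 * p2
    Z = {}
    for E in nonterminals:
        best = 0.0
        for (F, A1, A2), q in binary_rules.items():
            if A1 in Z1 and A2 in Z2:
                best = max(best, u_max[E, F] * q * Z1[A1] * Z2[A2])
        if best > 0.0:
            Z[E] = best
    return Z
```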
|
{ |
|
"text": "We characterize this parsing procedure as structurally deterministic, as an unlabeled structure is built deterministically in the first instance. The exact choices of rules can be postponed until after reaching the end of the sentence. Then follows a straightforward process of 'backtracing', which builds the derivation that led to the computed probability associated with the start symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The time complexity is now O(|w| \u2022 |N | 5 ) in the most straightforward implementation, but we can reduce this to quadratic in the size of the grammar provided we allow an additional factor |w| as before. For more details see Appendix B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural determinism", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "One way to improve accuracy is to increase the size of the lookahead, beyond the current 1, comparable to the generalization from LR(1) to LR(k) parsing. The formulas are given in Appendix C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other variants", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Yet another variant investigates only the topmost n stack symbols when choosing the next parser action. In combination with Appendix A, this brings the time complexity down again to linear time in the length of the input string. The required changes to the formulas are given in Appendix D. There is a slight similarity to (Schuler, 2009) , in that no stack elements beyond a bounded depth are considered at each parsing step, but in our case the stack can still have arbitrary height.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 338, |
|
"text": "(Schuler, 2009)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other variants", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Whereas we have concentrated on determinism in this paper, one can also introduce a limited degree of nondeterminism and allow some of the most promising configurations at each input position to compete, applying techniques such as beam search (Roark, 2001; Zhang and Clark, 2009; Zhu et al., 2013) , best-first search (Sagae and Lavie, 2006) , or A * search (Klein and Manning, 2003) in order to keep the running time low. For comparing different configurations, one would need to multiply the values E(\u03b1, a) as in Section 3 by the probabilities of the subderivations associated with occurrences of grammar symbols in stack \u03b1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 257, |
|
"text": "(Roark, 2001;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 280, |
|
"text": "Zhang and Clark, 2009;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 298, |
|
"text": "Zhu et al., 2013)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 342, |
|
"text": "(Sagae and Lavie, 2006)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 384, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other variants", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Further variants are obtained by replacing the parsing strategy. One obvious candidate is leftcorner parsing (Rosenkrantz and Lewis II, 1970) , which is considerably simpler than LR parsing. The resulting algorithm would be very different from the left-corner models of e.g. (Henderson, 2003) , which rely on neural networks instead of PCFGs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 141, |
|
"text": "(Rosenkrantz and Lewis II, 1970)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 292, |
|
"text": "(Henderson, 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other variants", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We used the WSJ treebank from OntoNotes 4.0 (Hovy et al., 2006) , with Sections 2-21 for training and the 2228 sentences of up to 40 words from Section 23 for testing. Grammars with different sizes, and in the required binary form, were extracted by using the tools from the Berkeley parser (Petrov et al., 2006) , with between 1 and 6 splitmerge cycles. These tools offer a framework for handling unknown words, which we have adopted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 63, |
|
"text": "(Hovy et al., 2006)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 312, |
|
"text": "(Petrov et al., 2006)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The implementation of the parsing algorithms is in C++, running on a desktop with four 3.1GHz Intel Core i5 CPUs. The main algorithm is that of Appendix C, with lookahead k between 1 and 3, also in combination with structural determinism (Appendix B), which is indicated here by sd. The variant that consults the stack down to bounded depth n (Appendix D) will only be reported for k = 1 and n = 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Bracketing recall, precision and F-measure, are computed using evalb, with settings as in (Collins, 1997) , except that punctuation was deleted. 1 Table 1 reports results. A nonterminal B in the stack may occur in a small number of rules of the form A \u2192 BC. The C of one such rule is needed next in order to allow a reduction. If future input does not deliver this C, then parsing may fail. This problem becomes more severe as nonterminals become more specific, which is what happens with an increase of the number of split-merge cycles. Even more failures are introduced by removing the ability to consult the complete stack, which explains the poor results in the case of k = 1, n = 5; lower values of n lead to even more failures, and higher values further increase the running time. That the running time exceeds that of k = 1 is explained by the fact that with the variant from Appendix D, every pop or push requires a complete recomputation of all function values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 105, |
|
"text": "(Collins, 1997)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 171, |
|
"text": "Table 1 reports results.", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Parse failures can be almost completely eliminated however by choosing higher values of k and by using structural determinism. A combination thereof leads to high accuracy, not far below that of the Viterbi parses. Note that one cannot expect the accuracy of our deterministic parsers to exceed that of Viterbi parses. Both rely on the same model (a PCFG), but the first is forced to make local decisions without access to the input string that follows the bounded lookahead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have shown that deterministic parsers can be constructed from a given PCFG. Much of the accuracy of the grammar can be retained by choosing a large lookahead in combination with 'structural determinism', which postpones commitment to nonterminals until the end of the input is reached.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Parsers of this nature potentially run in linear time in the length of the input, but our parsers are better implemented to run in quadratic time. In terms of the grammar size, the experiments suggest that the number of rules is the dominating factor. The size of the lookahead strongly affects running time. The extra time costs of structural determinism are compensated by an increase in accuracy and a sharp decrease of the parse failures. There are many advantages over other approaches to deterministic parsing that rely on general-purpose classifiers. First, some state-ofthe-art language models are readily available as PCFGs. Second, most classifiers require treebanks, whereas our algorithms are also applicable to PCFGs that were obtained in any other way, for example through intersection of language models. Lastly, our algorithms fit within well understood automata theory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Acknowledgments We thank the reviewers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The following are the formulas that correspond to the first implemented variant. Relative to Section 3, some auxiliary functions are broken up, and associating the lookahead a with an appropriate nonterminal B is now done in G:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Formulas for quadratic time complexity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "F(\u03b5, E) = 1 if E = S \u2020 0 otherwise F(\u03b1A, E) = \u03c0=F \u2192AE F (\u03b1, F ) \u2022 p(\u03c0) F (\u03b1, F ) = E F(\u03b1, E) \u2022 V(E, F ) G(\u03b1, E, a) = F F (\u03b1, F ) \u2022 G (F, E, a) G (F, E, a) = \u03c0=F \u2192EB p(\u03c0) \u2022 L(B, a) H(\u03b5, E, a) = G(\u03b5, E, a) H(\u03b1A, E, a) = \u03c0=F \u2192AE H (\u03b1, F, a) \u2022 p(\u03c0) + G(\u03b1A, E, a) H (\u03b1, F, a) = E H(\u03b1, E, a) \u2022 U(E, F ) E(\u03b1\u03b2, a, F \u2192 \u03b2) = H (\u03b1, F, a) \u2022 p(F \u2192 \u03b2) E(\u03b1, a, F \u2192 \u03b2) = 0 if \u00ac\u2203 \u03b3 \u03b1 = \u03b3\u03b2 E(\u03b1A, a, shift) = G(\u03b1, A, a) E(\u03b5, a, shift) = L(S \u2020 , a)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Formulas for quadratic time complexity", |
|
"sec_num": null |
|
}, |
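The first two recurrences of this appendix can be evaluated left to right along the stack as a sequence of vector updates. The encodings below (`binary_rules` as `{(F, A, E): p}`, `V` as a dict over nonterminal pairs, assumed precomputed) are our own illustrative choices, not the authors' code.

```python
# Sketch of computing F(alpha, E) and F'(alpha, F) incrementally:
# start from F(epsilon, .) and apply one update per stack symbol.

def compute_F(stack, nonterminals, binary_rules, V, start="S"):
    # F(epsilon, E) = 1 iff E is the (augmented) start symbol
    F = {E: (1.0 if E == start else 0.0) for E in nonterminals}
    for A in stack:
        # F'(alpha, G) = sum_E F(alpha, E) * V(E, G)
        Fp = {G: sum(F[E] * V.get((E, G), 0.0) for E in nonterminals)
              for G in nonterminals}
        # F(alpha A, E) = sum_{pi = G -> A E} F'(alpha, G) * p(pi)
        newF = {E: 0.0 for E in nonterminals}
        for (G, A2, E), p in binary_rules.items():
            if A2 == A:
                newF[E] += Fp[G] * p
        F = newF
    return F
```

Storing the intermediate vectors for each stack prefix is what makes the per-action cost bounded, as discussed in Section 2.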
|
{ |
|
"text": "These equations correspond to a time complexity of O(|w| 2 \u2022 |N | 2 + |w| \u2022 |P |). Each definition except that of G involves one stack (of linear size) and, at most, one terminal plus two arbitrary nonterminals. The full grammar is only considered once for every input position, in the definition of G .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Formulas for quadratic time complexity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The values are stored as vectors and matrices. For example, for each distinct lookahead symbol a, there is a (sparse) matrix containing the value of G (F, E, a) at a row and a column uniquely identified by F and E, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Formulas for quadratic time complexity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the variant from Section 4, we need to change only two definitions of auxiliary functions: The only actions are shift and generalized binary reduce red . The definition of E becomes: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Formulas for structural determinism", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "F(\u03b1Z, E) = (A,p)\u2208Z,\u03c0=F \u2192AE F (\u03b1, F ) \u2022 p(\u03c0) \u2022 p H(\u03b1Z,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Formulas for structural determinism", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "E(\u03b1Z 1 Z 2 , a, red ) = (A 1 ,p 1 )\u2208Z 1 ,(A 2 ,p 2 )\u2208Z 2 \u03c0=F \u2192A 1 A 2 H (\u03b1, F, a) \u2022 p(\u03c0) \u2022 p 1 \u2022 p 2 E(\u03b1Z,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Formulas for structural determinism", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to handle k symbols of lookahead (Section 5) some technical problems are best avoided by having k copies of the end-of-sentence marker appended behind the input string, with a corresponding augmentation of the grammar. We generalize L(B, v) to be the sum of p( If I is given for all prefixes of a fixed lookahead string of length k (this requires cubic time in k), we can compute L in linear time for all suffixes of the same string:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "L(B, v) = B V(B, B ) \u2022 L (B , v) L (B, v) = \u03c0=B\u2192B 1 B 2 ,v 1 ,v 2 : v=v 1 v 2 ,1\u2264|v 1 |,1\u2264|v 2 | p(\u03c0) \u2022 I(B 1 , v 1 ) \u2022 L(B 2 , v 2 ) if |v| > 1 L (B, a) = \u03c0=B\u2192a p(\u03c0)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
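The lookahead recurrence for L(B, v) can be sketched with memoization as below; the prefix probabilities I(B, v) are assumed to be given (computed separately, in the style of Jelinek and Lafferty, 1991), and the dict-based grammar encodings are our own illustration.

```python
# Sketch of L(B, v) for a lookahead string v: sum over unary chains
# via V, then either a lexical rule (|v| = 1) or a binary rule with
# all splits v = v1 v2, combining prefix probabilities I with L.

def lookahead_L(B, v, V, binary_rules, lexical_rules, I, nonterminals,
                memo=None):
    if memo is None:
        memo = {}
    if (B, v) in memo:
        return memo[B, v]
    total = 0.0
    for Bp in nonterminals:
        w = V.get((B, Bp), 0.0)
        if w == 0.0:
            continue
        if len(v) == 1:
            lp = lexical_rules.get((Bp, v), 0.0)    # L'(B', a)
        else:
            lp = 0.0                                # L'(B', v), |v| > 1
            for (Bq, B1, B2), p in binary_rules.items():
                if Bq != Bp:
                    continue
                for i in range(1, len(v)):          # v = v1 v2, both non-empty
                    lp += (p * I.get((B1, v[:i]), 0.0) *
                           lookahead_L(B2, v[i:], V, binary_rules,
                                       lexical_rules, I, nonterminals, memo))
        total += w * lp
    memo[B, v] = total
    return total
```

Memoizing over suffixes of a fixed lookahead string is what yields the linear-in-k cost per string claimed in the text.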
|
{ |
|
"text": "The function H is generalized straightforwardly by letting it pass on a string v (1 \u2264 |v| \u2264 k) instead of a single terminal a. The same holds for E. The function G requires a slightly bigger modification, leading back to H if not all of the lookahead has been matched yet:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "G(\u03b1, E, v) = F F (\u03b1, F ) \u2022 G (F, E, v) + F,v 1 ,v 2 :v=v 1 v 2 ,|v 2 |>0 H (\u03b1, F, v 2 ) \u2022 G (F, E, v 1 ) G (F, E, v) = \u03c0=F \u2192EB p(\u03c0) \u2022 L(B, v) G (F, E, v) = \u03c0=F \u2192EB p(\u03c0) \u2022 I(B, v)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The time complexity is now O(k \u2022 |w|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2 \u2022 |N | 2 + k 3 \u2022 |w| \u2022 |P |).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Formulas for larger lookahead", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As discussed in Section 5, we want to predict the next parser action without consulting any symbols in \u03b1, when the current stack is \u03b1\u03b2, with |\u03b2| = n. This is achieved by approximating F(\u03b1, E) by the outside value of E, that is, the sum of p(d) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Investigation of top-most n stack symbols only", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for all d such that \u2203 \u03b1,w S d \u21d2 rm \u03b1Ew. Similarly, H (\u03b1, F, v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Investigation of top-most n stack symbols only", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evalb otherwise stumbles over e.g. a part of speech consisting of two single quotes in the parsed file, against a part of speech 'POS' in the gold file, for an input token consisting of a single quote.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Parsing, volume 1 of The Theory of Parsing, Translation and Compiling", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Aho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Ullman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A.V. Aho and J.D. Ullman. 1972. Parsing, volume 1 of The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, N.J.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The structure of shared forests in ambiguous parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Billot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "27th Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Billot and B. Lang. 1989. The structure of shared forests in ambiguous parsing. In 27th An- nual Meeting of the ACL, Proceedings of the Confer- ence, pages 143-151, Vancouver, British Columbia, Canada, June.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "25--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Briscoe and J. Carroll. 1993. Generalized prob- abilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25-59.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Benchmarking of statistical dependency parsers for French", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Henestroza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "The 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Candito, J. Nivre, P. Denis, and E. Henestroza An- guiano. 2010. Benchmarking of statistical de- pendency parsers for French. In The 23rd Inter- national Conference on Computational Linguistics, pages 108-116, Beijing, China, August.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Parsing to Stanford dependencies: Trade-offs between speed and accuracy", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-C", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "LREC 2010: Seventh International Conference on Language Resources and Evaluation, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1628--1632", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Cer, M.-C. de Marneffe, D. Jurafsky, and C. Man- ning. 2010. Parsing to Stanford dependen- cies: Trade-offs between speed and accuracy. In LREC 2010: Seventh International Conference on Language Resources and Evaluation, Proceedings, pages 1628-1632, Valletta , Malta, May.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Three generative, lexicalised models for statistical parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "35th Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In 35th Annual Meeting of the ACL, Proceedings of the Conference, pages 16-23, Madrid, Spain, July.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "An improved context-free recognizer", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Harrison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Ruzzo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "ACM Transactions on Programming Languages and Systems", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "415--462", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S.L. Graham, M.A. Harrison, and W.L. Ruzzo. 1980. An improved context-free recognizer. ACM Trans- actions on Programming Languages and Systems, 2:415-462.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Generative versus discriminative models for statistical left-corner parsing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "8th International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Henderson. 2003. Generative versus discrimina- tive models for statistical left-corner parsing. In 8th International Workshop on Parsing Technolo- gies, pages 115-126, LORIA, Nancy, France, April.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "OntoNotes: The 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. OntoNotes: The 90% solu- tion. In Proceedings of the Human Language Tech- nology Conference of the NAACL, Main Conference, pages 57-60, New York, USA, June.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Dynamic programming for linear-time incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1077--1086", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Huang and K. Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the ACL, pages 1077- 1086, Uppsala, Sweden, July.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Computation of the probability of initial substring generation by stochastic context-free grammars", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Computational Linguistics", |
|
"volume": "17", |
|
"issue": "3", |
|
"pages": "315--323", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Jelinek and J.D. Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3):315-323.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Induction of greedy controllers for deterministic treebank parsers", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kalt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Kalt. 2004. Induction of greedy controllers for de- terministic treebank parsers. In Conference on Em- pirical Methods in Natural Language Processing, pages 17-24, Barcelona, Spain, July.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A * parsing: Fast exact Viterbi parse selection", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C.D. Manning. 2003. A * parsing: Fast exact Viterbi parse selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the ACL, pages 40- 47, Edmonton, Canada, May-June.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "On the translation of languages from left to right", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Knuth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "Information and Control", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "607--639", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.E. Knuth. 1965. On the translation of languages from left to right. Information and Control, 8:607- 639.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Dynamic programming algorithms for transition-based dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Satta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "49th Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "673--682", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Kuhlmann, C. G\u00f3mez-Rodr\u00edguez, and G. Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In 49th An- nual Meeting of the ACL, Proceedings of the Con- ference, pages 673-682, Portland, Oregon, June.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "An empirical comparison of generalized LR tables", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lankhorst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Tomita's Algorithm: Extensions and Applications, Proc. of the first Twente Workshop on Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Lankhorst. 1991. An empirical comparison of gen- eralized LR tables. In R. Heemels, A. Nijholt, and K. Sikkel, editors, Tomita's Algorithm: Extensions and Applications, Proc. of the first Twente Work- shop on Language Technology, pages 87-93. Uni- versity of Twente, September.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "GLR * -an efficient noise-skipping parsing algorithm for context free grammars", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Third International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Lavie and M. Tomita. 1993. GLR * -an effi- cient noise-skipping parsing algorithm for context free grammars. In Third International Workshop on Parsing Technologies, pages 123-134, Tilburg (The Netherlands) and Durbuy (Belgium), August.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "An alternative method of training probabilistic LR parsers", |
|
"authors": [ |
|
{ |
|
"first": "M.-J", |
|
"middle": [], |
|
"last": "Nederhof", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Satta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "42nd Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "551--558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.-J. Nederhof and G. Satta. 2004. An alternative method of training probabilistic LR parsers. In 42nd Annual Meeting of the ACL, Proceedings of the Con- ference, pages 551-558, Barcelona, Spain, July.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Probabilistic LR parsing for general context-free grammars", |
|
"authors": [ |
|
{ |
|
"first": "S.-K", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proc. of the Second International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "154--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S.-K. Ng and M. Tomita. 1991. Probabilistic LR pars- ing for general context-free grammars. In Proc. of the Second International Workshop on Parsing Tech- nologies, pages 154-163, Cancun, Mexico, Febru- ary.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Algorithms for deterministic incremental dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "4", |
|
"pages": "513--553", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Nivre. 2008. Algorithms for deterministic incremen- tal dependency parsing. Computational Linguistics, 34(4):513-553.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning accurate, compact, and interpretable tree annotation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Thibaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "433--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433-440, Sydney, Australia, July.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A tutorial on hidden Markov models and selected applications in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "77", |
|
"issue": "2", |
|
"pages": "257--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L.R. Rabiner. 1989. A tutorial on hidden Markov mod- els and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, February.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A linear observed time statistical parser based on maximum entropy models", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Ratnaparkhi. 1997. A linear observed time statis- tical parser based on maximum entropy models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 1- 10, Providence, Rhode Island, USA, August.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Probabilistic top-down parsing and language modeling", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Computational Linguistics", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "249--276", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Deterministic left corner parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rosenkrantz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.J. Rosenkrantz and P.M. Lewis II. 1970. Determin- istic left corner parsing. In IEEE Conference Record of the 11th Annual Symposium on Switching and Au- tomata Theory, pages 139-152.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A classifier-based parser with linear run-time complexity", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Ninth International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Sagae and A. Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceed- ings of the Ninth International Workshop on Parsing Technologies, pages 125-132, Vancouver, British Columbia, Canada, October.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A best-first probabilistic shift-reduce parser", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "691--698", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Sagae and A. Lavie. 2006. A best-first probabilistic shift-reduce parser. In Proceedings of the 21st Inter- national Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 691- 698, Sydney, Australia, July.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Positive results for parsing with a bounded stack using a model-based right-corner transform", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "344--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Schuler. 2009. Positive results for parsing with a bounded stack using a model-based right-corner transform. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL, pages 344- 352, Boulder, Colorado, May-June.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Experiments with GLR and chart parsing", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Shann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Generalized LR Parsing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Shann. 1991. Experiments with GLR and chart pars- ing. In M. Tomita, editor, Generalized LR Parsing, chapter 2, pages 17-34. Kluwer Academic Publish- ers.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Sentence disambiguation by a shift-reduce parsing technique", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "21st Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S.M. Shieber. 1983. Sentence disambiguation by a shift-reduce parsing technique. In 21st Annual Meeting of the ACL, Proceedings of the Conference, pages 113-118, Cambridge, Massachusetts, July.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "LR(k) and LL(k) Parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sippu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Soisalon-Soininen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "EATCS Monographs on Theoretical Computer Science", |
|
"volume": "II", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Sippu and E. Soisalon-Soininen. 1990. Parsing The- ory, Vol. II: LR(k) and LL(k) Parsing, volume 20 of EATCS Monographs on Theoretical Computer Sci- ence. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "An efficient probabilistic contextfree parsing algorithm that computes prefix probabilities", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computational Linguistics", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "167--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Stolcke. 1995. An efficient probabilistic context- free parsing algorithm that computes prefix proba- bilities. Computational Linguistics, 21(2):167-201.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Graph-structured stack and natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "26th Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Tomita. 1988. Graph-structured stack and natu- ral language parsing. In 26th Annual Meeting of the ACL, Proceedings of the Conference, pages 249- 257, Buffalo, New York, June.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Chunk parsing revisited", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Ninth International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--140", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Tsuruoka and J. Tsujii. 2005. Chunk parsing re- visited. In Proceedings of the Ninth International Workshop on Parsing Technologies, pages 133-140, Vancouver, British Columbia, Canada, October.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Learning a lightweight robust deterministic parser", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Sixth European Conference on Speech Communication and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2047--2050", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Wong and D. Wu. 1999. Learning a lightweight robust deterministic parser. In Sixth European Con- ference on Speech Communication and Technology, pages 2047-2050.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Statistical dependency analysis with support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "8th International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "195--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Yamada and Y. Matsumoto. 2003. Statistical de- pendency analysis with support vector machines. In 8th International Workshop on Parsing Technolo- gies, pages 195-206, LORIA, Nancy, France, April.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Recognition and parsing of context-free languages in time n 3", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Younger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Information and Control", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "189--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.H. Younger. 1967. Recognition and parsing of context-free languages in time n 3 . Information and Control, 10:189-208.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Transition-based parsing of the Chinese treebank using a global discriminative model", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th International Conference on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "162--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Zhang and S. Clark. 2009. Transition-based pars- ing of the Chinese treebank using a global discrimi- native model. In Proceedings of the 11th Interna- tional Conference on Parsing Technologies, pages 162-171, Paris, France, October.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Fast and accurate shift-reduce constituent parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "51st Annual Meeting of the ACL, Proceedings of the Conference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "434--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Zhu, Y. Zhang, W. Chen, M. Zhang, and J. Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In 51st Annual Meeting of the ACL, Pro- ceedings of the Conference, volume 1, pages 434- 443, Sofia, Bulgaria, August.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "time complexity is O(|w| \u2022 |N | 4 ), where |w| is the length of the input w. A finer analysis gives O(|w| \u2022 (|N | \u2022 |P | + |N | 2 \u2022 P )), where P is the maximum for all A of the number of rules of the form F \u2192 AE. By splitting up G and H into smaller functions, we obtain complexity O(|w| \u2022 |N | 3 ), which can still be prohibitive.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "E, a) = (A,p)\u2208Z,\u03c0=F \u2192AE H (\u03b1, F, a) \u2022 p(\u03c0) \u2022 p + G(\u03b1Z, E, a)", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "a, shift) = (A,p)\u2208Z G(\u03b1, A, a) \u2022 p The time complexity now increases to O(|w| 2 \u2022 (|N | 2 + |P |)) due to the new H.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "d) for all d such that B d \u21d2 rm vx, some x. We let I(B, v) be the sum of p(d) for all d such that B d \u21d2 rm v.", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": ") is approximated by E G(\u03b1, E, v) \u2022 W(E, F ) where: W(C, D) = d : \u2203 \u03b4 C d \u21d2 rm \u03b4D p(d)The time complexity (with lookahead k) is nowO(k \u2022 n \u2022 |w| \u2022 |N | 2 + k 3 \u2022 |w| \u2022 |P |).", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">time fail</td><td>R</td><td>P</td><td>F1</td><td/><td colspan=\"2\">time fail</td><td>R</td><td>P</td><td>F1</td></tr><tr><td colspan=\"3\">1-split-merge (12,059 rules)</td><td/><td/><td/><td colspan=\"3\">4-split-merge (269,162 rules)</td></tr><tr><td>k = 1</td><td>43</td><td colspan=\"4\">11 67.20 66.67 66.94</td><td>k = 1</td><td colspan=\"2\">870 115 75.69 73.30 74.48</td></tr><tr><td>k = 2</td><td>99</td><td colspan=\"4\">0 70.74 71.01 70.88</td><td>k = 2</td><td>2,257</td><td>1 83.48 82.35 82.91</td></tr><tr><td>k = 3</td><td>199</td><td colspan=\"4\">0 71.41 71.85 71.63</td><td>k = 3</td><td>4,380</td><td>1 84.95 84.06 84.51</td></tr><tr><td>k = 1, sd</td><td>62</td><td colspan=\"4\">0 68.12 68.52 68.32</td><td>k = 1, sd</td><td>2,336</td><td>1 80.82 80.65 80.74</td></tr><tr><td>k = 2, sd</td><td>135</td><td colspan=\"4\">0 70.98 71.72 71.35</td><td>k = 2, sd</td><td>4,747</td><td>0 85.52 85.64 85.58</td></tr><tr><td>k = 3, sd</td><td>253</td><td colspan=\"4\">0 71.31 72.50 71.90</td><td>k = 3, sd</td><td>7,728</td><td>0 86.62 86.82 86.72</td></tr><tr><td>k = 1, n = 5</td><td colspan=\"5\">56 170 66.19 65.67 65.93</td><td>k = 1, n = 5</td><td colspan=\"2\">1,152 508 76.21 73.92 75.05</td></tr><tr><td>Viterbi</td><td/><td colspan=\"4\">0 72.45 74.55 73.49</td><td>Viterbi</td><td/><td>0 87.95 88.10 88.02</td></tr><tr><td colspan=\"3\">2-split-merge (32,994 rules)</td><td/><td/><td/><td colspan=\"3\">5-split-merge (716,575 rules)</td></tr><tr><td>k = 1</td><td>120</td><td colspan=\"4\">33 72.65 70.50 71.56</td><td>k = 1</td><td colspan=\"2\">3,166 172 76.17 73.44 74.78</td></tr><tr><td>k = 2</td><td>275</td><td colspan=\"4\">1 78.44 77.26 77.84</td><td>k = 2</td><td>7,476</td><td>2 84.14 82.80 83.46</td></tr><tr><td>k = 3</td><td>568</td><td colspan=\"4\">0 79.81 79.27 79.54</td><td>k = 3</td><td>14,231</td><td>1 86.05 85.24 85.64</td></tr><tr><td>k = 1, sd</td><td>196</td><td colspan=\"4\">0 74.78 74.96 
74.87</td><td>k = 1, sd</td><td>7,427</td><td>1 81.99 81.44 81.72</td></tr><tr><td>k = 2, sd</td><td>439</td><td colspan=\"4\">0 79.96 80.40 80.18</td><td>k = 2, sd</td><td>14,587</td><td>0 86.89 87.00 86.95</td></tr><tr><td>k = 3, sd</td><td>770</td><td colspan=\"4\">0 80.49 81.20 80.85</td><td>k = 3, sd</td><td>24,553</td><td>0 87.67 87.82 87.74</td></tr><tr><td>k = 1, n = 5</td><td colspan=\"5\">146 247 72.27 70.34 71.29</td><td>k = 1, n = 5</td><td colspan=\"2\">4,572 559 77.65 75.13 76.37</td></tr><tr><td>Viterbi</td><td/><td colspan=\"4\">0 82.16 82.69 82.43</td><td>Viterbi</td><td/><td>0 88.65 89.00 88.83</td></tr><tr><td colspan=\"3\">3-split-merge (95,647 rules)</td><td/><td/><td/><td colspan=\"3\">6-split-merge (1,947,915 rules)</td></tr><tr><td>k = 1</td><td>305</td><td colspan=\"4\">75 74.39 72.33 73.35</td><td>k = 1</td><td colspan=\"2\">7,741 274 76.60 74.08 75.32</td></tr><tr><td>k = 2</td><td>770</td><td colspan=\"4\">3 81.32 80.35 80.83</td><td>k = 2</td><td>19,440</td><td>5 84.60 83.17 83.88</td></tr><tr><td>k = 3</td><td>1,596</td><td colspan=\"4\">0 82.78 82.35 82.56</td><td>k = 3</td><td>35,712</td><td>0 86.02 85.07 85.54</td></tr><tr><td>k = 1, sd</td><td>757</td><td colspan=\"4\">0 78.11 78.37 78.24</td><td>k = 1, sd</td><td>19,530</td><td>1 82.64 81.95 82.29</td></tr><tr><td>k = 2, sd</td><td>1,531</td><td colspan=\"4\">0 82.85 83.39 83.12</td><td>k = 2, sd</td><td>39,615</td><td>0 87.36 87.20 87.28</td></tr><tr><td>k = 3, sd</td><td>2,595</td><td colspan=\"4\">0 83.66 84.25 83.96</td><td>k = 3, sd</td><td>64,906</td><td>0 88.16 88.26 88.21</td></tr><tr><td>k = 1, n = 5</td><td colspan=\"5\">404 401 74.52 72.39 73.44</td><td colspan=\"3\">k = 1, n = 5 10,897 652 77.89 75.57 76.71</td></tr><tr><td>Viterbi</td><td/><td colspan=\"4\">0 85.38 86.03 85.71</td><td>Viterbi</td><td/><td>0 88.69 88.99 88.84</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Total time required (seconds), number of parse failures, recall, precision, F-measure, for deterministic parsing, compared to the Viterbi parses as computed with the Berkeley parser." |
|
} |
|
} |
|
} |
|
} |