{
"paper_id": "Q13-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:08:15.642313Z"
},
"title": "Efficient Stacked Dependency Parsing by Forest Reranking",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"postCode": "8916-5, 630-0192",
"settlement": "Takayama, Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Shuhei",
"middle": [],
"last": "Kondo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"postCode": "8916-5, 630-0192",
"settlement": "Takayama, Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"postCode": "8916-5, 630-0192",
"settlement": "Takayama, Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a discriminative forest reranking algorithm for dependency parsing that can be seen as a form of efficient stacked parsing. A dynamic programming shift-reduce parser produces a packed derivation forest which is then scored by a discriminative reranker, using the 1-best tree output by the shift-reduce parser as guide features in addition to third-order graph-based features. To improve efficiency and accuracy, this paper also proposes a novel shift-reduce parser that eliminates the spurious ambiguity of arcstandard transition systems. Testing on the English Penn Treebank data, forest reranking gave a state-of-the-art unlabeled dependency accuracy of 93.12.",
"pdf_parse": {
"paper_id": "Q13-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a discriminative forest reranking algorithm for dependency parsing that can be seen as a form of efficient stacked parsing. A dynamic programming shift-reduce parser produces a packed derivation forest which is then scored by a discriminative reranker, using the 1-best tree output by the shift-reduce parser as guide features in addition to third-order graph-based features. To improve efficiency and accuracy, this paper also proposes a novel shift-reduce parser that eliminates the spurious ambiguity of arcstandard transition systems. Testing on the English Penn Treebank data, forest reranking gave a state-of-the-art unlabeled dependency accuracy of 93.12.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There are two main approaches of data-driven dependency parsing -one is graph-based and the other is transition-based.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the graph-based approach, global optimization algorithms find the highest-scoring tree with locally factored models (McDonald et al., 2005) . While third-order graph-based models achieve stateof-the-art accuracy, it has O(n 4 ) time complexity for a sentence of length n. Recently, some pruning techniques have been proposed to improve the efficiency of third-order models (Rush and Petrov, 2012; Zhang and McDonald, 2012) .",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF22"
},
{
"start": 376,
"end": 399,
"text": "(Rush and Petrov, 2012;",
"ref_id": "BIBREF26"
},
{
"start": 400,
"end": 425,
"text": "Zhang and McDonald, 2012)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The transition-based approach usually employs the shift-reduce parsing algorithm with linear-time complexity (Nivre, 2008) . It greedily chooses the transition with the highest score and the resulting transition sequence is not always globally optimal. The beam search algorithm improves parsing flexibility in deterministic parsing (Zhang and Clark, 2008; Zhang and Nivre, 2011) , and dynamic programming makes beam search more efficient (Huang and Sagae, 2010) .",
"cite_spans": [
{
"start": 109,
"end": 122,
"text": "(Nivre, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 333,
"end": 356,
"text": "(Zhang and Clark, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 357,
"end": 379,
"text": "Zhang and Nivre, 2011)",
"ref_id": "BIBREF34"
},
{
"start": 439,
"end": 462,
"text": "(Huang and Sagae, 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is also an alternative approach that integrates graph-based and transition-based models (Sagae and Lavie, 2006; Zhang and Clark, 2008; Nivre and McDonald, 2008; Martins et al., 2008) . Martins et al. (2008) formulated their approach as stacking of parsers where the output of the first-stage parser is provided to the second as guide features. In particular, they used a transition-based parser for the first stage and a graph-based parser for the second stage. The main drawback of this approach is that the efficiency of the transition-based parser is sacrificed because the second-stage employs full parsing.",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Sagae and Lavie, 2006;",
"ref_id": "BIBREF27"
},
{
"start": 118,
"end": 140,
"text": "Zhang and Clark, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 141,
"end": 166,
"text": "Nivre and McDonald, 2008;",
"ref_id": "BIBREF23"
},
{
"start": 167,
"end": 188,
"text": "Martins et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 191,
"end": 212,
"text": "Martins et al. (2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes an efficient stacked parsing method through discriminative reranking with higher-order graph-based features, which works on the forests output by the first-stage dynamic programming shift-reduce parser and integrates nonlocal features efficiently with cube-pruning (Huang and Chiang, 2007) . The advantages of our method are as follows:",
"cite_spans": [
{
"start": 285,
"end": 309,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Unlike the conventional stacking approach, the first-stage shift-reduce parser prunes the search space of the second-stage graph-based parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In addition to guide features, the second-stage graph-based parser can employ the scores of the first-stage parser which cannot be incorpo- i < n reduce \u21b6 : Figure 1 : The arc-standard transition-based dependency parsing system with dynamic programming: means \"take anything\". a \u21b7 b denotes that a tree b is attached to a tree a.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 167,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "state p : (i, j, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 ) : \u03c0 \u2032 state q \u2113 : (j, k, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u03c0 \u2113 + 1 : (i, k, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 \u21b6 s 0 ) : \u03c0 \u2032 s \u2032 0 .h.w \u0338 = w 0 \u2227 p \u2208 \u03c0 reduce \u21b7 : state p : (i, j, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 ) : \u03c0 \u2032 state q \u2113 : (j, k, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u03c0 \u2113 + 1 : (i, k, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 \u21b7 s 0 ) : \u03c0 \u2032 p \u2208 \u03c0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In contrast to joint transition-based/graphbased approaches (Zhang and Clark, 2008; Bohnet and Kuhn, 2012) which require a large beam size and make dynamic programming impractical, our two-stage approach can integrate both models with little loss of efficiency.",
"cite_spans": [
{
"start": 62,
"end": 85,
"text": "(Zhang and Clark, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 86,
"end": 108,
"text": "Bohnet and Kuhn, 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, the elimination of spurious ambiguity from the arc-standard shift-reduce parser improves the efficiency and accuracy of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use a beam search shift-reduce parser with dynamic programming as our baseline system. Figure 1 shows it as a deductive system (Shieber et al., 1995) . A state is defined as the following:",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Shieber et al., 1995)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 90,
"end": 96,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
{
"text": "\u2113 : (i, j, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u03c0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
{
"text": "where \u2113 is the step size, [i, j] is the span of the topmost stack element s 0 , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
{
"text": "The system takes 2n steps for a complete analysis. \u03c0 is a set of pointers to the predictor states, each of which is the state just before shifting the root word Table 1 : Additional feature templates for shift-reduce parsers: q denotes input queue. h, lc and rc are head, leftmost child and rightmost child of a stack element s. lc2 and rc2 denote the second leftmost and rightmost children. t and w are a part-of-speech (POS) tag and a word. of s 0 into stack 1 . Dynamic programming merges equivalent states in the same step if they have the same feature values. We add the feature templates shown in Table 1 to Huang and Sagae (2010) 's feature templates.",
"cite_spans": [
{
"start": 614,
"end": 636,
"text": "Huang and Sagae (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 1",
"ref_id": null
},
{
"start": 603,
"end": 610,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
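To make the state merging concrete, here is a minimal sketch (not the authors' implementation; `State`, `kernel`, and `merge_step` are illustrative names) of merging equivalent beam states that share the same feature-relevant kernel:

```python
from collections import namedtuple

# A state keeps only what features can see: the span [i, j] and the
# top-d stack elements; predictor states are collected in a set.
State = namedtuple("State", "step i j stack preds score")

def kernel(state, d=2):
    """Merging signature: two states are equivalent when the features
    extracted from them are guaranteed to be identical."""
    return (state.i, state.j, state.stack[-d - 1:])

def merge_step(candidates):
    """Keep one state per kernel, unioning predictor sets (dynamic
    programming); the best score represents the merged state."""
    merged = {}
    for s in candidates:
        k = kernel(s)
        if k not in merged:
            merged[k] = s
        else:
            best = merged[k]
            merged[k] = best._replace(preds=best.preds | s.preds,
                                      score=max(best.score, s.score))
    return sorted(merged.values(), key=lambda s: -s.score)

# Two states that differ only in derivation history are merged into one:
a = State(4, 1, 3, ("$", "saw"), frozenset({"p1"}), 1.5)
b = State(4, 1, 3, ("$", "saw"), frozenset({"p2"}), 1.2)
print(len(merge_step([a, b])))  # -> 1
```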
{
"text": "s 0 .h.t \u2022 s 0 .lc.t \u2022 s 0 .lc2.t s 0 .h.t \u2022 s 0 .rc.t \u2022 s 0 .rc2.t s 1 .h.t \u2022 s 1 .lc.t \u2022 s 1 .lc2.t s 1 .h.t \u2022 s 1 .rc.t \u2022 s 1 .rc2.t s 0 .h.t \u2022 s 0 .lc.t \u2022 s 0 .lc2.t \u2022 q 0 .t s 0 .h.t \u2022 s 0 .rc.t \u2022 s 0 .rc2.t \u2022 q 0 .t s 0 .h.t \u2022 s 1 .h.t \u2022 q 0 .t \u2022 q 1 .t s 0 .h.w \u2022 s 1 .h.t \u2022 q 0 .t \u2022 q 1 .t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
{
"text": "Dynamic programming not only makes the shiftreduce parser with beam search more efficient but also produces a packed forest that encodes an exponential number of dependency trees. A packed dependency forest can be represented by a weighted (directed) hypergraph. A weighted hypergraph is a pair H = \u27e8V, E\u27e9, where V is the set of vertices and E is the set of hyperedges. Each hyperedge e \u2208 E is a tuple e = \u27e8T (e), h(e), f e \u27e9, where h(e) \u2208 V is X($) saw0, 7 Figure 2 : An example of packed dependency (derivation) forest: each vertex has information about the topmost stack element of the corresponding state to it. its head vertex, T (e) \u2208 V + is an ordered list of tail vertices, and f e is a weight for e. Figure 2 shows an example of a packed forest. Each binary hyperedge corresponds to a reduce action, and each leaf vertex corresponds to a shift action. Each vertex also corresponds to a state, and parse histories on the states can be encoded into the vertices. In the example, information about the topmost stack element is attached to the corresponding vertex marked with a non-terminal symbol X.",
"cite_spans": [],
"ref_spans": [
{
"start": 458,
"end": 466,
"text": "Figure 2",
"ref_id": null
},
{
"start": 709,
"end": 717,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
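As an illustration, a packed forest can be stored with data structures along these lines (a sketch under the paper's definitions; the class and function names are ours, not the authors'):

```python
from dataclasses import dataclass, field

# A packed dependency forest as a weighted hypergraph H = <V, E>.
@dataclass
class Vertex:
    label: str                        # e.g. the topmost stack element "X(saw) 1,3"
    incoming: list = field(default_factory=list)

@dataclass
class Hyperedge:
    head: Vertex                      # h(e)
    tails: tuple                      # T(e), ordered; binary for reduce actions
    weight: float                     # f_e: reduction weight (+ absorbed shift weight)

def add_reduce_edge(head, left, right, weight):
    e = Hyperedge(head, (left, right), weight)
    head.incoming.append(e)
    return e

def count_trees(v, memo=None):
    """Number of derivations packed below vertex v (leaves count as 1)."""
    memo = {} if memo is None else memo
    if id(v) in memo:
        return memo[id(v)]
    if not v.incoming:
        n = 1
    else:
        n = sum(count_trees(e.tails[0], memo) * count_trees(e.tails[1], memo)
                for e in v.incoming)
    memo[id(v)] = n
    return n

# Tiny usage example: one reduce edge joining two shifted words.
leaf1, leaf2, root = Vertex("X(I)"), Vertex("X(saw)"), Vertex("X($)")
add_reduce_edge(root, leaf1, leaf2, weight=1.0)
print(count_trees(root))  # -> 1
```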
{
"text": "Weights are omitted in the example. In practice, we attach each reduction weight to the corresponding hyperedge, and add the shift weight to the reduction weight when a shifted word is reduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing",
"sec_num": "2"
},
{
"text": "One solution to remove spurious ambiguity in the arc-standard transition system is to give priority to the construction of left arcs over that of right arcs (or vice versa) like Eisner (1997) . For example, an Earley dependency parser (Hayashi et al., 2012) attaches all left dependents to a word before right dependents. The parser uses a scan action to stop the construction of left arcs. We apply this idea to the arc-standard transition system and show the resulting transition system in Figure 3 . We introduce the * symbol to indicate that the root node of the topmost element on the stack has not been scanned yet. The shift and reduce \u21b7 actions can be used only when the root of the topmost element on the stack has already been scanned, and all left arcs are always attached to the head before the head is scanned.",
"cite_spans": [
{
"start": 178,
"end": 191,
"text": "Eisner (1997)",
"ref_id": "BIBREF7"
},
{
"start": 235,
"end": 257,
"text": "(Hayashi et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 492,
"end": 500,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "The arc-standard shift-reduce parser without spurious ambiguity takes 3n steps to finish parsing, and the additional n scan actions add surplus vertices and (unary) hyperedges to a packed forest. However, it is easy to remove them from the packed forest because the consequent state of a scan action has a unique antecedent state and all the hyperedges going out from a vertex corresponding to the consequent state can be attached to the vertex corresponding to the antecedent state. The scan weight of the removed unary hyperedge is added to each weight of the hyperedges attached to the antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
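A minimal sketch of this contraction (our own illustrative encoding, not the authors' code: vertices are ids, edges are (head, tails, weight) triples, and `scan_of` maps each post-scan vertex to its unique antecedent and the scan weight):

```python
def contract_scan(edges, scan_of):
    """edges: iterable of (head, tails, weight) over vertex ids; scan_of:
    maps the consequent (post-scan) vertex id to (antecedent id, scan
    weight). The unary scan edges themselves are assumed to be excluded
    from `edges`; every later edge that consumed a scanned vertex is
    reattached to the antecedent, absorbing the scan weight."""
    contracted = []
    for head, tails, w in edges:
        new_tails = []
        for t in tails:
            if t in scan_of:          # edge consumed a scanned vertex:
                ante, sw = scan_of[t]  # reattach to the antecedent and
                t, w = ante, w + sw    # add the removed scan weight
            new_tails.append(t)
        contracted.append((head, tuple(new_tails), w))
    return contracted

# Usage: one reduce edge built on a scanned vertex v2.
edges = [("v3", ("v2", "v1"), 0.7)]
scan_of = {"v2": ("v2_raw", 0.3)}
print(contract_scan(edges, scan_of))  # [('v3', ('v2_raw', 'v1'), 1.0)]
```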
{
"text": "4 Experiments (Spurious Ambiguity vs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "We conducted experiments on the English Penn Treebank (PTB) data to compare spurious and nonspurious shift-reduce parsers. We split the WSJ part of PTB into sections 02-21 for training, section 22 for development, and section 23 for test. We used the head rules (Yamada and Matsumoto, 2003) to convert phrase structure to dependency structure.",
"cite_spans": [
{
"start": 262,
"end": 290,
"text": "(Yamada and Matsumoto, 2003)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "0 : (0, 1, w 0 ) : \u2205 goal(c 3n ) : 3n : (0, n, s 0 ) : \u2205 shift : state p \u2113 : ( , j, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u2113 + 1 : (j, j + 1, s d\u22121 |s d\u22122 | . . . |s 0 |w * j ) : (p) j < n scan : \u2113 : (i, j, s d |s d\u22121 | . . . |s 1 |s * 0 ) : \u03c0 \u2113 + 1 : (i, j, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u03c0 reduce \u21b6 : state p : (i, j, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 0 |s \u2032 0 ) : \u03c0 \u2032 state q \u2113 : (j, k, s d |s d\u22121 | . . . |s 1 |s * 0 ) : \u03c0 \u2113 + 1 : (i, k, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 \u21b6 s * 0 ) : \u03c0 \u2032 s \u2032 0 .h.w \u0338 = w 0 \u2227 p \u2208 \u03c0 reduce \u21b7 : state p : (i, j, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 ) : \u03c0 \u2032 state q \u2113 : (j, k, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u03c0 \u2113 + 1 : (i, k, s \u2032 d |s \u2032 d\u22121 | . . . |s \u2032 1 |s \u2032 0 \u21b7 s 0 ) : \u03c0 \u2032 p \u2208 \u03c0 Figure 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "The dynamic programming arc-standard transition-based deductive system without spurious ambiguity: the symbol represents that the root node of the topmost element on the stack has not been scanned yet. Table 2 : Unlabeled accuracy scores (UAS) and parsing times (+forest dumping times, second per sentence) for parsing development (WSJ22) and test (WSJ23) data with spurious shift-reduce and proposed shift-reduce parser (non-sp.) using several beam sizes.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "We used an early update version of the averaged perceptron algorithm (Collins and Roark, 2004; Huang et al., 2012) to train two shift-reduce dependency parsers with beam size of 12. Table 2 shows experimental results of parsing the development and test datasets with each of the spurious and non-spurious shift-reduce parsers using several beam sizes. Parsing accuracies were evaluated by unlabeled accuracy scores (UAS) with and without punctuations. The parsing times were measured on an Intel Core i7 2.8GHz. The average cpu time (per sentence) includes that of dumping packed forests. This result indicates that the non-spurious parser achieves better accuracies than the spurious beam size 8 32 128 % of distinct trees 1093.5 94.8 95.0 % of distinct trees 10081.8 84.9 87.2 % of distinct trees 100070.6 73.1 77.6 % of distinct trees (10000) 62.1 64.3 65.6 Table 3 : The percentages of distinct dependency trees in 10, 100, 1000 and 10000 best trees extracted from spurious forests with several beam sizes.",
"cite_spans": [
{
"start": 69,
"end": 94,
"text": "(Collins and Roark, 2004;",
"ref_id": "BIBREF5"
},
{
"start": 95,
"end": 114,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 2",
"ref_id": null
},
{
"start": 861,
"end": 868,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "parser without loss of efficiency. Figure 4 shows oracle unlabeled accuracies of spurious k-best lists, non-spurious k-best lists, spurious forests, and non-spurious forests. We extract an oracle tree from each packed forest using the for- \"kbest\" \"forest\" \"non-sp-kbest\" \"non-sp-forest\" Figure 4 : Each plot shows oracle unlabeled accuracies of spurious k-best lists, spurious forests, and non-spurious forests. The oracle accuracies are evaluated using UAS with punctuations. est oracle algorithm (Huang, 2008) . Both forests produce much better results than the k-best lists, and non-spurious forests have almost the same oracle accuracies as spurious forests.",
"cite_spans": [
{
"start": 499,
"end": 512,
"text": "(Huang, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 4",
"ref_id": null
},
{
"start": 288,
"end": 296,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "However, as shown in Table 3 , spurious forests encode a number of non-unique dependency trees while all dependency trees in non-spurious forests are distinct from each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Arc-Standard Shift-Reduce Parsing without Spurious Ambiguity",
"sec_num": "3"
},
{
"text": "We define a reranking model based on the graphbased features as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = argmax y\u2208H \u03b1 \u2022 f g (x, y)",
"eq_num": "(1)"
}
],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "where \u03b1 is a weight vector, f g is a feature vector (g indicates \"graph-based\"), x is the input sentence, y is a dependency tree and H is a dependency forest. This model assumes a hyperedge factorization which induces a decomposition of the feature vector as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 \u2022 f g (x, y) = \u2211 e\u2208y \u03b1 \u2022 f g,e (e).",
"eq_num": "(2)"
}
],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
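Because the score decomposes over hyperedges, the forest can be decoded bottom-up; below is a minimal sketch of the generalized Viterbi search for Eq. (1)-(2), assuming vertices shaped like the hypergraph sketch earlier (`incoming` hyperedges with `tails`) and an illustrative `local_feats` feature extractor:

```python
import math

def viterbi(root, alpha, local_feats):
    """Bottom-up (generalized) Viterbi over a packed forest: by Eq. (2)
    the tree score decomposes over hyperedges, so the best subtree below
    each vertex can be memoized. `local_feats(e)` returns a feature-count
    dict for hyperedge e; `alpha` is the weight vector as a dict."""
    memo = {}

    def best(v):
        if id(v) in memo:
            return memo[id(v)]
        if not v.incoming:                      # leaf vertex: a shifted word
            memo[id(v)] = (0.0, v.label)
            return memo[id(v)]
        top = (-math.inf, None)
        for e in v.incoming:
            # hyperedge-factored score: alpha . f_{g,e}(e)
            s = sum(alpha.get(f, 0.0) * c for f, c in local_feats(e).items())
            subtrees = [best(t) for t in e.tails]
            s += sum(score for score, _ in subtrees)
            if s > top[0]:
                top = (s, (v.label, [tree for _, tree in subtrees]))
        memo[id(v)] = top
        return top

    return best(root)
```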
{
"text": "The search problem can be solved by simply using the (generalized) Viterbi algorithm (Klein and Manning, 2001 ). When using non-local features, the hyperedge factorization is redefined to the following:",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "(Klein and Manning, 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 \u2022 f g (x, y) = \u2211 e\u2208y \u03b1 \u2022 f g,e (e) + \u03b1 \u2022 f g,e,N (e)",
"eq_num": "(3)"
}
],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "where f g,e,N is a non-local feature vector. Though the cube-pruning algorithm (Huang and Chiang, 2007) is an approximate decoding technique based on a k-best Viterbi algorithm, it can calculate the non-local scores efficiently. The baseline score can be taken into the reranker as a linear interpolation:",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
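For intuition, here is a compact cube-pruning sketch in the style of Huang and Chiang (2007), assuming binary hyperedges and illustrative `edge_score`/`nonlocal_score` callbacks; it is an approximation because non-local scores are added only when a candidate is popped:

```python
import heapq

def cube_prune(vertex, k, edge_score, nonlocal_score, kbest):
    """k-best combination at one vertex: pop the best candidate from a
    heap, rescore it with non-local features, and push its neighbours in
    the (i, j) grid. `kbest` maps each tail vertex to its sorted list of
    (score, tree) items; returns up to k rescored (score, tree) items."""
    heap, seen, results = [], set(), []
    for e in vertex.incoming:
        l, r = kbest[e.tails[0]], kbest[e.tails[1]]
        heapq.heappush(heap, (-(edge_score(e) + l[0][0] + r[0][0]), id(e), e, 0, 0))
        seen.add((id(e), 0, 0))
    while heap and len(results) < k:
        neg, _, e, i, j = heapq.heappop(heap)
        l, r = kbest[e.tails[0]], kbest[e.tails[1]]
        tree = (vertex.label, [l[i][1], r[j][1]])
        # non-local features are evaluated only on popped candidates,
        # which is what makes the search approximate
        results.append((-neg + nonlocal_score(e, tree), tree))
        for di, dj in ((1, 0), (0, 1)):
            ni, nj = i + di, j + dj
            if ni < len(l) and nj < len(r) and (id(e), ni, nj) not in seen:
                seen.add((id(e), ni, nj))
                score = edge_score(e) + l[ni][0] + r[nj][0]
                heapq.heappush(heap, (-score, id(e), e, ni, nj))
    results.sort(key=lambda x: -x[0])
    return results
```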
{
"text": "y = argmax y\u2208H \u03b2 \u2022 sc tr (x, y) + \u03b1 \u2022 f g (x, y) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "where sc tr is the score from the baseline parser (tr indicates \"transition-based\"), and \u03b2 is a scaling factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Reranking Model",
"sec_num": "5.1"
},
{
"text": "While the inference algorithm is a simple Viterbi algorithm, the discriminative model can use all trisibling features and some grand-sibling features 2 (Koo and Collins, 2010) as a local scoring factor in addition to the first-and sibling second-order graphbased features. This is because the first stage shiftreduce parser uses features described in Section 2 and this information can be encoded into vertices of a hypergraph.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Koo and Collins, 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "The reranking model also uses guide features extracted from the 1-best tree predicted by the first stage shift-reduce parser. We define the guide features as first-order relations like those used in Nivre and McDonald (2008) though our parser handles only unlabeled and projective dependency structures. We summarize the features for discriminative reranking model as the following:",
"cite_spans": [
{
"start": 199,
"end": 224,
"text": "Nivre and McDonald (2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 First-and second-order features: these features are the same as those used in MST parser 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Grand-child features: we define tri-gram POS features with POS tags of grand parent, parent, and rightmost or leftmost child.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Tri-sibling features: we define tri-gram features with three POS-tags of child, sibling, and trisibling. We also define tri-gram features with one word and two POS tags of the above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 Guide feaures: we define a feature indicating whether an arc from a child to its parent is present in the 1-best tree predicted by the firststage shift-reduce parser, conjoined with the POS tags of the parent and child.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
{
"text": "\u2022 PP-Attachment features: when a parent word is a preposition, we define tri-gram features with the parent word and POS tags of grand parent and the rightmost child.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "5.2.1"
},
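As referenced in the guide-features item above, a minimal sketch of how such first-order guide features could be extracted (the feature-string format and names are illustrative, not the authors' exact templates):

```python
def guide_features(parent, child, head_of_1best, pos):
    """Fire a feature indicating whether the arc child -> parent is
    present in the 1-best tree of the first-stage shift-reduce parser,
    conjoined with the POS tags of parent and child."""
    present = head_of_1best.get(child) == parent
    return {
        f"guide={present}": 1.0,
        f"guide={present}|pt={pos[parent]}|ct={pos[child]}": 1.0,
    }

# Usage: the first-stage 1-best tree as a child -> head map, e.g. "I saw her".
head_of_1best = {1: 2, 2: 0, 3: 2}
pos = {0: "ROOT", 1: "PRP", 2: "VBD", 3: "PRP"}
print(guide_features(2, 1, head_of_1best, pos))
```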
{
"text": "To define richer features as a non-local factor, we extend a local reranking algorithm by augmenting each k-best item with all child vertices of its head vertex 4 . Information about all children enables the reranker to calculate the following features when reducing the head vertex:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local Features",
"sec_num": "5.2.2"
},
{
"text": "\u2022 Grand-child features: we define tri-gram features with one word and two POS tags of grand parent, parent, and child.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local Features",
"sec_num": "5.2.2"
},
{
"text": "\u2022 Grand-sibling features: we define 4-gram POS features with POS tags of grand parent, parent, child and sibling. We also define coordination features with POS tags of grand parent, parent and child when the sibling word is a coordinate conjunction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local Features",
"sec_num": "5.2.2"
},
{
"text": "\u2022 Valency features: we define a feature indicating the number of children of a head, conjoined with each of its word and POS tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local Features",
"sec_num": "5.2.2"
},
{
"text": "When using non-local features, we removed the local grand-child features from the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local Features",
"sec_num": "5.2.2"
},
{
"text": "A discriminative reranking model is trained on packed forests by using their oracle trees as the correct parse. More accurate oracles are essential to train a discriminative reranking model well. While large size forests have much more accurate oracles than small size forests, large forests have too many hyperedges to train a discriminative model on them, as shown in Figure 4 . The usual forest reranking algorithms (Huang, 2008; Hayashi et al., 2011) remove low quality hyperedges from large forests by using inside-outside forest pruning.",
"cite_spans": [
{
"start": 419,
"end": 432,
"text": "(Huang, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 433,
"end": 454,
"text": "Hayashi et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 370,
"end": 378,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Oracle for Discriminative Training",
"sec_num": "5.3"
},
{
"text": "However, producing large forests and pruning them is computationally very expensive. Instead, we propose a simpler method to produce small forests which have more accurate oracles by forcing the beam search shift-reduce parser to keep the correct state in the beam buffer. As a result, the correct tree will always be encoded in a packed forest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle for Discriminative Training",
"sec_num": "5.3"
},
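A minimal sketch of this oracle-forcing heuristic (illustrative signatures; `is_gold` would test whether a state lies on the gold transition sequence):

```python
def beam_step(candidates, beam_size, is_gold):
    """Score-prune one beam step, but always retain the state on the gold
    transition sequence, so the correct tree stays in the packed forest."""
    ranked = sorted(candidates, key=lambda s: -s.score)
    beam = ranked[:beam_size]
    if not any(is_gold(s) for s in beam):
        gold = next((s for s in ranked if is_gold(s)), None)
        if gold is not None:
            beam[-1] = gold      # evict the weakest state in favour of gold
    return beam
```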
{
"text": "6 Experiments (Discriminative Reranking)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle for Discriminative Training",
"sec_num": "5.3"
},
{
"text": "Following (Huang, 2008) , the training set (WSJ02-21) is split into 20 folds, and each fold is parsed by each of the spurious and non-spurious shift-reduce parsers using beam size 12 with the model trained on sentences from the remaining 19 folds, dumping the outputs as packed forests.",
"cite_spans": [
{
"start": 10,
"end": 23,
"text": "(Huang, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "The reranker is modeled by either equation 1or (4). By our preliminary experiments using development data (WSJ22), we modeled the reranker with equation 1when training, and with equation 4when testing 5 (i.e., the scores of the first-stage parser are not considered during training of the reranking model). This prevents the discriminative reranking features from under-training (Sutton et al., 2006; Hollingshead and Roark, 2008) .",
"cite_spans": [
{
"start": 379,
"end": 400,
"text": "(Sutton et al., 2006;",
"ref_id": "BIBREF30"
},
{
"start": 401,
"end": 430,
"text": "Hollingshead and Roark, 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "A discriminative reranking model is trained on the packed forests by using the averaged perceptron algorithm with 5 iterations. When training nonlocal reranking models, we set k-best size of cubepruning to 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
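For concreteness, a sketch of the averaged perceptron loop over packed forests (illustrative signatures; `decode` stands for the Viterbi/cube-pruning search described in Section 5.1):

```python
def train_reranker(forests, oracles, decode, feats, epochs=5):
    """`decode(forest, alpha)` returns the model's best tree under the
    current weights, `oracles[i]` is the oracle tree encoded in forest i,
    and `feats(tree)` returns a feature-count dict; trees are assumed to
    be comparable with !=."""
    alpha, total, t = {}, {}, 1
    for _ in range(epochs):
        for forest, gold in zip(forests, oracles):
            pred = decode(forest, alpha)
            if pred != gold:                     # standard perceptron update
                for f, c in feats(gold).items():
                    alpha[f] = alpha.get(f, 0.0) + c
                    total[f] = total.get(f, 0.0) + t * c
                for f, c in feats(pred).items():
                    alpha[f] = alpha.get(f, 0.0) - c
                    total[f] = total.get(f, 0.0) - t * c
            t += 1
    # averaging via the lazy-update trick: alpha_avg = alpha - total / t
    return {f: alpha[f] - total.get(f, 0.0) / t for f in alpha}
```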
{
"text": "For dumping packed forests for test data, spurious and non-spurious shift-reduce parsers are trained by the averaged perceptron algorithm. In all experiments on English data, we fixed beam size to 12 for training both parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "6.1"
},
{
"text": "We show the comparison of dumped spurious and non-spurious packed forests for training data in Ta method described in Section 5.3. The 1-best accuracy of the non-spurious forests is higher than that of the spurious forests. As we expected, the results show that there are many non-unique dependency trees in the spurious forests. The spurious forests also get larger than the non-spurious forests. Table 5 shows how long the training on spurious and non-spurious forests took on an Opteron 8356 2.3GHz. It is clear from the results that training on non-spurious forests is more efficient than that on spurious forests. Table 6 shows the statistics of spurious and nonspurious packed forests dumped by shift-reduce parsers using beam size 12 for test data. The trends are similar to those for training data shown in Table 4. We show the results of the forest reranking algorithms for test data in Table 5 : Training times on both spurious and nonspurious packed forests (beam 12): pre-comp. denotes cpu time for feature extraction and attaching features to all hyperedges. The non-local models were trained setting k-best size of cube-pruning to 5, and non-local features were calculated on-the-fly while training. packed forests using four beam sizes 8, 12, 32, and 64. The reranking on non-spurious forests achieves better accuracies and is slightly faster than that on spurious forests consistently.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 97,
"text": "Ta",
"ref_id": null
},
{
"start": 398,
"end": 405,
"text": "Table 5",
"ref_id": null
},
{
"start": 619,
"end": 626,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 896,
"end": 903,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test with Gold POS tags",
"sec_num": "6.2"
},
{
"text": "To compare the proposed reranking system with other systems, we evaluate its parsing accuracy on test data with automatic POS tags. We used the Stanford POS tagger 6 with a model trained on sections 02-21 to tag development and test data, and used 10-way jackknifing to tag training data. The tagging accuracies on training, development, and test data were 97.1, 97.2, and 97.5. Table 8 : Comparison with other systems: the results were evaluated on testing data (WSJ23) with automatic POS tags: label means labeled dependency parsing and the cpu times of our systems were taken on Intel Core i7 2.8GHz.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 8",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Test with Automatic POS tags",
"sec_num": "6.3"
},
{
"text": "our proposed systems together with results from related work. The parsing times are reported in tokens/second for comparison. Note that, however, the difference of the parsing time does not represent the efficiency of the algorithm directly because each system was implemented in different programming language and the times were measured on different environments. The accuracy of local reranking on non-spurious forests is the best among unlabeled shift-reduce parsers, but slightly behind the third-order graphbased systems (Koo and Collins, 2010; Zhang and McDonald, 2012; Rush and Petrov, 2012) . It is likely that the difference comes from the fact that our local reranking model can define only some of the grand-child related features. w/ guide. w/o guide. P P P P P P P P To define all grand-child features and other nonlocal features, we also experimented with the nonlocal reranking algorithm on non-spurious packed forests. It achieved almost the same accuracy as the previous third-order graph-based algorithms. Moreover, the computational overhead is very small when setting k-best size of cube-pruning small.",
"cite_spans": [
{
"start": 527,
"end": 550,
"text": "(Koo and Collins, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 551,
"end": 576,
"text": "Zhang and McDonald, 2012;",
"ref_id": "BIBREF33"
},
{
"start": 577,
"end": 599,
"text": "Rush and Petrov, 2012)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test with Automatic POS tags",
"sec_num": "6.3"
},
{
"text": "One advantage of our reranking approach is that guide features can be defined as in stacked parsing. To analyze the effect of the guide features on parsing accuracy, we remove the guide features from baseline reranking models with and without non-local features used in Section 6.3. The results are shown in Table 9 and 10. The parsing accuracies of the baseline reranking models are better than those of the models without guide features though the number of guide features is not large. Additionally, each model with guide features is smaller than that without guide features. This indicates that stacking has a good effect on training the models.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.4"
},
{
"text": "To further investigate the effects of guide features, we tried to define unlabeled versions of the secondorder guide features used in (Martins et al., 2008; McClosky et al., 2012) . However, these features did not produce good results, and investigation to find the cause is an important future work.",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Martins et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 157,
"end": 179,
"text": "McClosky et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.4"
},
{
"text": "We also examined parsing errors in more detail. Table 11 shows root and sentence complete rates of three systems, the non-spurious shift-reduce w/ guide. w/o guide. P P P P P P P P parser, local reranking, and non-local reranking. The two reranking systems outperform the shiftreduce parser significantly, and the non-local reranking system is the best among them. Part of the difference between the shift-reduce parser and reranking systems comes from the correction of coordination errors. Table 12 shows the head correct rate, recall, precision, F-measure and complete rate of coordination structures, by which we mean the head and siblings of a token whose POS tag is CC. The head correct rate denotes how correct a head of the CC token is. The recall, precision, F-measure are measured by counting arcs between the head and siblings. When the head of the CC token is incorrect, all arcs of the coordination structure are counted as incorrect. Therefore, the recall, precision, F-measure are greatly affected by the head correct rate, and though the complete rate of non-local reranking is higher than that of local reranking, the results of the first three measures are lower. We assume that the improvements of non-local reranking over the others can be mainly attributed to the better prediction of the structures around the sentence root because most of the non-local features are useful for predicting these structures. Table 13 shows the recall, precision and F-measure of grandchild structures whose grand parent is a sentence root symbol $. The results support the above assumption. The root correct rate directly influences on prediction of the overall structures of a sentence, and it is likely that the reduction of root prediction errors brings better results.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Table 11",
"ref_id": "TABREF12"
},
{
"start": 492,
"end": 500,
"text": "Table 12",
"ref_id": "TABREF14"
},
{
"start": 1429,
"end": 1437,
"text": "Table 13",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.4"
},
{
"text": "We also experiment on the Penn Chinese Treebank (CTB5). Following Huang and Sagae (2010), we split it into training (secs 001-815 and 1001-1136), development (secs 886-931 and 1148-1151), and test (secs 816-885 and 1137-1147) sets, and use the head rules of Zhang and Clark (2008) . The training set is split into 10 folds to dump packed forests for training of reranking models.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "Zhang and Clark (2008)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Chinese",
"sec_num": "6.5"
},
{
"text": "We set the beam size of both spurious and nonspurious parsers to 12, and the number of perceptron training iterations to 25 for the parsers and to 8 for both rerankers. The graph-based approach employs Eisner and Satta (1999) 's algorithm where spurious ambiguities are eliminated by the notion of split head automaton grammars (Alshawi, 1996) . However, the arc-standard transition-based parser has the spurious ambiguity problem. Cohen et al. (2012) proposed a method to eliminate the spurious ambiguity of shift-reduce transition systems. Their method covers existing systems such as the arcstandard and non-projective transition-based parsers (Attardi, 2006) . Our system copes only with the projective case, but is simpler than theirs and we show its efficacy empirically through some experiments.",
"cite_spans": [
{
"start": 202,
"end": 225,
"text": "Eisner and Satta (1999)",
"ref_id": "BIBREF6"
},
{
"start": 328,
"end": 343,
"text": "(Alshawi, 1996)",
"ref_id": "BIBREF0"
},
{
"start": 432,
"end": 451,
"text": "Cohen et al. (2012)",
"ref_id": "BIBREF4"
},
{
"start": 647,
"end": 662,
"text": "(Attardi, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Chinese",
"sec_num": "6.5"
},
{
"text": "The arc-eager shift-reduce parser also has a spurious ambiguity problem. Goldberg and Nivre (2012) addressed this problem by not only training with a canonical transition sequence but also with alternate optimal transitions that are calculated dynamically for a current state.",
"cite_spans": [
{
"start": 73,
"end": 98,
"text": "Goldberg and Nivre (2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Chinese",
"sec_num": "6.5"
},
{
"text": "Higher-order features like third-order dependency relations are essential to improve dependency parsing accuracy (Koo and Collins, 2010; Rush and Petrov, 2012; Zhang and McDonald, 2012) . A reranking approach is one effective solution to introduce rich features to a parser model in the context of constituency parsing (Charniak and Johnson, 2005; Huang, 2008) . Hall (2007) applied a k-best maximum spanning tree algorithm to non-projective dependency analysis, and showed that k-best discriminative reranking improves parsing accuracy in several languages. Sangati et al. (2009) proposed a k-best dependency reranking algorithm using a third-order generative model, and Hayashi et al. (2011) extended it to a forest algorithm. Though forest reranking requires some approximations such as cube-pruning to integrate non-local features, it can explore larger search space than k-best reranking.",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Koo and Collins, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 137,
"end": 159,
"text": "Rush and Petrov, 2012;",
"ref_id": "BIBREF26"
},
{
"start": 160,
"end": 185,
"text": "Zhang and McDonald, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 319,
"end": 347,
"text": "(Charniak and Johnson, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 348,
"end": 360,
"text": "Huang, 2008)",
"ref_id": "BIBREF16"
},
{
"start": 363,
"end": 374,
"text": "Hall (2007)",
"ref_id": "BIBREF9"
},
{
"start": 559,
"end": 580,
"text": "Sangati et al. (2009)",
"ref_id": "BIBREF28"
},
{
"start": 672,
"end": 693,
"text": "Hayashi et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods to Improve Dependency Parsing",
"sec_num": "7.2"
},
{
"text": "The stacking approach (Nivre and McDonald, 2008; Martins et al., 2008) uses the output of one dependency parser to provide guide features for another. Stacking improves the parsing accuracy of second stage parsers on various language datasets. The joint graph-based and transition-based approach (Zhang and Clark, 2008; Bohnet and Kuhn, 2012) uses an arc-eager shift-reduce parser with a joint graph-based and transition-based model. Though it improves parsing accuracy significantly, the large beam size of the shift-reduce parser harms its efficiency. Sagae and Lavie (2006) showed that combining the outputs of graph-based and transitionbased parsers can improve parsing accuracies.",
"cite_spans": [
{
"start": 22,
"end": 48,
"text": "(Nivre and McDonald, 2008;",
"ref_id": "BIBREF23"
},
{
"start": 49,
"end": 70,
"text": "Martins et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 296,
"end": 319,
"text": "(Zhang and Clark, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 320,
"end": 342,
"text": "Bohnet and Kuhn, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 554,
"end": 576,
"text": "Sagae and Lavie (2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods to Improve Dependency Parsing",
"sec_num": "7.2"
},
{
"text": "We have presented a discriminative forest reranking algorithm for dependency parsing. This can be seen as a kind of joint transition-based and graph-based approach because the first-stage parser is a shiftreduce parser and the second-stage reranker uses a graph-based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Additionally, we have proposed a dynamic programming arc-standard transition-based dependency parser without spurious ambiguity, along with a heuristic that encodes the correct tree in the output packed forest for reranker training, and shown that forest reranking works well on packed forests produced by the proposed parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "To improve the accuracy of reranking, we will engage in feature engineering. We need to further investigate effective higher-order guide and non-local features. It also seems promising to extend the unlabeled reranker to a labeled one because labeled information often improves unlabeled accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "In this paper, we adopt a reranking approach, but a rescoring approach is more promising to improve efficiency because it does not have the overhead of dumping packed forests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Huang and Sagae (2010)'s dynamic programming is based on a notion of a push computation(Kuhlmann et al., 2011). The details are out of scope here and readers may refer to the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The grand-child and grand-sibling features can be used only when interacting with the leftmost or rightmost child and sibling. In case of local reranking, we did not use grand-sibling features because in our experiments, they were not effective.3 http://www.seas.upenn.edu/\u02dcstrctlrn/ MSTParser/MSTParser.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If each item is augmented with richer information, even features based on the entire subtree can be defined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The scaling factor \u03b2 was tuned by minimum error rate training (MERT) algorithm(Och, 2003) using development data. The MERT algorithm is suited to tune low-dimensional parameters. The \u03b2 was set to about 1.2 in case of local reranking, and to about 1.5 in case of non-local reranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/ tagger.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable comments. This work was partly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Head automata for speech translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. the ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi. 1996. Head automata for speech translation. In Proc. the ICSLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Experiments with a multilanguage non-projective dependency parser",
"authors": [
{
"first": "G",
"middle": [],
"last": "Attardi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the 10th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "166--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proc. of the 10th Conference on Natural Language Learning, pages 166-170.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The best of bothworldsa graph-based completion model for transition-based parsers",
"authors": [
{
"first": "B",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "77--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Bohnet and J. Kuhn. 2012. The best of bothworlds - a graph-based completion model for transition-based parsers. In Proceedings of the 13th Conference of the European Chapter of the Association for Compu- tational Linguistics, pages 77-87.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coarse-to-fine nbest parsing and maxent discriminative reranking",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak and M. Johnson. 2005. Coarse-to-fine n- best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Associ- ation for Computational Linguistics, pages 173-180.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Elimination of spurious ambiguity in transition-based dependency parsing",
"authors": [
{
"first": "S",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. B. Cohen, C. G\u00f3mez-Rodr\u00edguez, and G. Satta. 2012. Elimination of spurious ambiguity in transition-based dependency parsing. Technical report.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient parsing for bilexical context-free grammars and head automaton grammars",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Eisner",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "457--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual Meet- ing of the Association for Computational Linguistics, pages 457-464.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bilexical grammars and a cubic-time probabilistic parser",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th International Workshop on Parsing Technologies (IWPT)",
"volume": "",
"issue": "",
"pages": "54--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 1997. Bilexical grammars and a cubic-time probabilistic parser. In Proceedings of the 5th Inter- national Workshop on Parsing Technologies (IWPT), pages 54-65.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Goldberg and J. Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of the 24rd International Conference on Computational Lin- guistics (Coling 2012).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "K-best spanning tree parsing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "392--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Hall. 2007. K-best spanning tree parsing. In Proceed- ings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 392-399.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The third-order variational reranking on packed-shared dependency forests",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1479--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Hayashi, T. Watanabe, M. Asahara, and Y. Mat- sumoto. 2011. The third-order variational reranking on packed-shared dependency forests. In Proceedings of the 2011 Conference on Empirical Methods in Nat- ural Language Processing, pages 1479-1488.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Head-driven transition-based parsing with top-down prediction",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "657--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Hayashi, T. Watanabe, M. Asahara, and Y. Mat- sumoto. 2012. Head-driven transition-based parsing with top-down prediction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 657-665.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Reranking with baseline system scores and ranks as features",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Hollingshead and B. Roark. 2008. Reranking with baseline system scores and ranks as features. In CSLU-08-001, Center for Spoken Language Under- standing, Oregon Health and Science University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang and D. Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Pro- ceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144-151.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL'10)",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang and K. Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Com- putational Linguistics (ACL'10), pages 1077-1086.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Structured perceptron with inexact search",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fayong",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang, S. Fayong, and Y. Guo. 2012. Structured per- ceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142-151.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Forest reranking: Discriminative parsing with non-local features",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "586--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang. 2008. Forest reranking: Discriminative pars- ing with non-local features. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 586-594.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parsing and hypergraphs",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 7th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2001. Parsing and hyper- graphs. In Proceedings of the 7th International Work- shop on Parsing Technologies.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient third-order dependency parsers",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL'10)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo and M. Collins. 2010. Efficient third-order de- pendency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguis- tics (ACL'10), pages 1-11.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic programming algorithms for transitionbased dependency parsers",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "673--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kuhlmann, C. G\u00f3mez-Rodr\u00edguez, and G. Satta. 2011. Dynamic programming algorithms for transition- based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 673-682.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Stacking dependency parsers",
"authors": [
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Das",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 F. T. Martins, D. Das, N. A. Smith, and E. P. Xing. 2008. Stacking dependency parsers. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 157-166.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stanfords system for parsing the english web",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL) at NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. McClosky, W. Che, M. Recasens, M. Wang, R. Socher, and C. D. Manning. 2012. Stanfords system for pars- ing the english web. In Proceedings of First Work- shop on Syntactic Analysis of Non-Canonical Lan- guage (SANCL) at NAACL 2012.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. On- line large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Asso- ciation for Computational Linguistics (ACL'05), pages 91-98.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Integrating graphbased and transition-based dependency parsers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "950--958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre and R. McDonald. 2008. Integrating graph- based and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950-958.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre. 2008. Algorithms for deterministic incremen- tal dependency parsing. Computational Linguistics, 34:513-553.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. the 41st ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och. 2003. Minimum error rate training in statisti- cal machine translation. In Proc. the 41st ACL, pages 160-167.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Vine pruning for efficient multi-pass dependency parsing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "498--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Rush and S. Petrov. 2012. Vine pruning for effi- cient multi-pass dependency parsing. In Proceedings of the 2012 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 498-507.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. HLT",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sagae and A. Lavie. 2006. Parser combination by reparsing. In Proc. HLT, pages 129-132.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A generative re-ranking model for dependency parsing",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sangati",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zuidema",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)",
"volume": "",
"issue": "",
"pages": "238--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Sangati, W. Zuidema, and R. Bod. 2009. A generative re-ranking model for dependency parsing. In Proceed- ings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 238-241.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Principles and implementation of deductive parsing",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 1995,
"venue": "J. Log. Program",
"volume": "24",
"issue": "1&2",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. M. Shieber, Y. Schabes, and F. C. N. Pereira. 1995. Principles and implementation of deductive parsing. J. Log. Program., 24(1&2):3-36.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Reducing weight undertraining in structured discriminative learning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sindelar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2006,
"venue": "Conference on Human Language Technology and North American Association for Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sutton, M. Sindelar, and A. McCallum. 2006. Reduc- ing weight undertraining in structured discriminative learning. In Conference on Human Language Tech- nology and North American Association for Computa- tional Linguistics (HLT-NAACL).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 10th International Conference on Parsing Technologies (IWPT'03)",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Yamada and Y. Matsumoto. 2003. Statistical depen- dency analysis with support vector machines. In Pro- ceedings of the 10th International Conference on Pars- ing Technologies (IWPT'03), pages 195-206.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A tale of two parsers: Investigating and combining graph-based and transitionbased dependency parsing using beam-search",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "562--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang and S. Clark. 2008. A tale of two parsers: In- vestigating and combining graph-based and transition- based dependency parsing using beam-search. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, pages 562-571.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Generalized higherorder dependency parsing with cube pruning",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "320--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zhang and R. McDonald. 2012. Generalized higher- order dependency parsing with cube pruning. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning, pages 320-331.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Transition-based dependency parsing with rich non-local features",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang and J. Nivre. 2011. Transition-based depen- dency parsing with rich non-local features. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 188-193.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": ", j, s d |s d\u22121 | . . . |s 1 |s 0 ) : \u2113 + 1 : (j, j + 1, s d\u22121 |s d\u22122 | . . . |s 0 |w j ) : (p)",
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td>sp.</td><td>non-sp.</td></tr><tr><td>ave. # of hyperedges</td><td>141.9</td><td>133.3</td></tr><tr><td>ave. # of vertices</td><td>199.1</td><td>187.6</td></tr><tr><td>ave. % of distinct trees</td><td>82.5</td><td>100.0</td></tr><tr><td>1-best UAS w/ punc.</td><td>92.5</td><td>92.6</td></tr><tr><td>oracle UAS w/ punc.</td><td>100.0</td><td>100.0</td></tr></table>",
"text": "Unlabeled accuracy scores and cpu times per sentence (parsing+reranking) when parsing and reranking test data (WSJ23) with gold POS tags: shift-reduce parser is denoted as sr (beam size, k: k-best size of cube pruning).",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Comparison of spurious (sp.) and non-spurious (non-sp.) forests: each forest is produced by baseline and proposed shift-reduce parsers using beam size 12 for 39832 training sentences with gold POS tags.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>. Each spu-</td></tr><tr><td>rious and non-spurious shift-reduce parser produces</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table><tr><td>lists the accuracy and parsing speed of</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table><tr><td>system</td><td colspan=\"2\">tok./sec. UAS w/o punc.</td></tr><tr><td>sr (12)</td><td>2130</td><td>92.5</td></tr><tr><td>w/ local (12)</td><td>1290</td><td>92.8</td></tr><tr><td>non-sp sr (12)</td><td>1950</td><td>92.6</td></tr><tr><td>w/ local (12)</td><td>1300</td><td>92.98</td></tr><tr><td>w/ non-local (12, k=1) w/ non-local (12, k=3) w/ non-local (12, k=12)</td><td>1280 1180 1060</td><td>93.1 93.12 93.12</td></tr><tr><td>Huang10 sr (8)</td><td>782</td><td>92.1</td></tr><tr><td>Rush12 sr (16)</td><td>4780</td><td>92.5</td></tr><tr><td>Rush12 sr (64)</td><td>1280</td><td>92.7</td></tr><tr><td>Koo10</td><td>-</td><td>93.04</td></tr><tr><td>Rush12 third</td><td>20</td><td>93.3</td></tr><tr><td>Rush12 vine</td><td>4400</td><td>93.1</td></tr><tr><td>H-Zhang12 third</td><td>50</td><td>92.81</td></tr><tr><td>H-Zhang12 (label)</td><td>220</td><td>93.06</td></tr><tr><td>Y-Zhang11 (64, label)</td><td>680</td><td>92.9</td></tr><tr><td>Bohnet12 (80, label)</td><td>120</td><td>93.39</td></tr></table>",
"text": "Comparison of spurious (sp.) and non-spurious (non-sp.) forests: each forest is produced by baseline and proposed shift-reduce parsers using beam size 12 for test data (WSJ23) with gold POS tags.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF9": {
"content": "<table><tr><td>: Accuracy and the number of non-zero weighted features of the local reranking models with and without guide features: the first-and second-order features are named for MSTParser.</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF11": {
"content": "<table><tr><td colspan=\"3\">: Accuracy and the number of non-zero weighted features of the non-local reranking models with and with-out guide features: the first-and second-order features are named for MSTParser.</td></tr><tr><td>system</td><td colspan=\"2\">UAS root comp.</td></tr><tr><td>non-sp sr</td><td>92.6 95.8</td><td>45.6</td></tr><tr><td>local</td><td>92.98 96.1</td><td>48.1</td></tr><tr><td colspan=\"2\">non-local 93.12 96.3</td><td>48.2</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF12": {
"content": "<table><tr><td>: Unlabeled accuracy, root correct rate, and sen-tence complete rate: these scores are measured on test data (WSJ23) without punctuations.</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF14": {
"content": "<table><tr><td colspan=\"4\">: Head correct rate, recall, precision, F-measure, and complete rate of coordination strutures: these are measured on test data (WSJ23).</td></tr><tr><td>system</td><td colspan=\"3\">recall precision F-measure</td></tr><tr><td colspan=\"2\">non-sp sr 91.58</td><td>92.5</td><td>92.04</td></tr><tr><td>local</td><td>91.96</td><td>92.95</td><td>92.45</td></tr><tr><td colspan=\"2\">non-local 92.44</td><td>93.07</td><td>92.75</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF15": {
"content": "<table><tr><td>: Recall, precision, and F-measure of grand-child structures whose grand parent is an artificial root symbol: these are measured on test data (WSJ23).</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF16": {
"content": "<table><tr><td>system</td><td colspan=\"2\">UAS root comp.</td></tr><tr><td>sr (12)</td><td>85.3 78.6</td><td>33.4</td></tr><tr><td colspan=\"2\">w/ non-local (12, k=3) 85.8 79.4</td><td>34.2</td></tr><tr><td>non-sp sr (12)</td><td>85.3 78.4</td><td>33.7</td></tr><tr><td colspan=\"2\">w/ non-local (12, k=3) 85.9 79.6</td><td>34.3</td></tr></table>",
"text": "shows the results for the test sets. As we expected, reranking on non-spurious forests outperforms that on spurious forests.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF17": {
"content": "<table><tr><td>7 Related Works</td></tr><tr><td>7.1 How to Handle Spurious Ambiguity</td></tr></table>",
"text": "Results on Chinese Treebank data (CTB5): evaluations are performed without punctuations.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}