{
"paper_id": "K15-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:45.182634Z"
},
"title": "Do dependency parsing metrics correlate with human judgments?",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mart\u00ednez",
"middle": [],
"last": "Alonso",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Eljko",
"middle": [],
"last": "Agi\u0107",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Danijela",
"middle": [],
"last": "Merkler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zagreb",
"location": {
"country": "Croatia"
}
},
"email": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Using automatic measures such as labeled and unlabeled attachment scores is common practice in dependency parser evaluation. In this paper, we examine whether these measures correlate with human judgments of overall parse quality. We ask linguists with experience in dependency annotation to judge system outputs. We measure the correlation between their judgments and a range of parse evaluation metrics across five languages. The humanmetric correlation is lower for dependency parsing than for other NLP tasks. Also, inter-annotator agreement is sometimes higher than the agreement between judgments and metrics, indicating that the standard metrics fail to capture certain aspects of parse quality, such as the relevance of root attachment or the relative importance of the different parts of speech.",
"pdf_parse": {
"paper_id": "K15-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Using automatic measures such as labeled and unlabeled attachment scores is common practice in dependency parser evaluation. In this paper, we examine whether these measures correlate with human judgments of overall parse quality. We ask linguists with experience in dependency annotation to judge system outputs. We measure the correlation between their judgments and a range of parse evaluation metrics across five languages. The humanmetric correlation is lower for dependency parsing than for other NLP tasks. Also, inter-annotator agreement is sometimes higher than the agreement between judgments and metrics, indicating that the standard metrics fail to capture certain aspects of parse quality, such as the relevance of root attachment or the relative importance of the different parts of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In dependency parser evaluation, the standard accuracy metrics-labeled and unlabeled attachment scores-are defined simply as averages over correct attachment decisions. Several authors have pointed out problems with these metrics; they are both sensitive to annotation guidelines (Schwartz et al., 2012; Tsarfaty et al., 2011) , and they fail to say anything about how parsers fare on rare, but important linguistic constructions (Nivre et al., 2010) . Both criticisms rely on the intuition that some parsing errors are more important than others, and that our metrics should somehow reflect that. There are sentences that are hard to annotate because they are ambiguous, or because they contain phenomena peripheral to linguistic theory, such as punctuation, clitics, or fragments. Manning (2011) discusses similar issues for part-ofspeech tagging.",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Schwartz et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 304,
"end": 326,
"text": "Tsarfaty et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 430,
"end": 450,
"text": "(Nivre et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To measure the variable relevance of parsing errors, we present experiments with human judgment of parse output quality across five languages: Croatian, Danish, English, German, and Spanish. For the human judgments, we asked professional linguists with dependency annotation experience to judge which of two parsers produced the better parse. Our stance here is that, insofar experts are able to annotate dependency trees, they are also able to determine the quality of a predicted syntactic structure, which we can in turn use to evaluate parser evaluation metrics. Even though downstream evaluation is critical in assessing the usefulness of parses, it also presents non-trivial challenges in choosing the appropriate downstream tasks (Elming et al., 2013) , we see human judgments as an important supplement to extrinsic evaluation.",
"cite_spans": [
{
"start": 737,
"end": 758,
"text": "(Elming et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, no prior study has analyzed the correlation between dependency parsing metrics and human judgments. For a range of other NLP tasks, metrics have been evaluated by how well they correlate with human judgments. For instance, the standard automatic metrics for certain tasks-such as BLEU in machine translation, or ROUGE-N and NIST in summarization or natural language generation-were evaluated, reaching correlation coefficients well above .80 (Papineni et al., 2002; Lin, 2004; Belz and Reiter, 2006; Callison-Burch et al., 2007) .",
"cite_spans": [
{
"start": 472,
"end": 495,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF17"
},
{
"start": 496,
"end": 506,
"text": "Lin, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 507,
"end": 529,
"text": "Belz and Reiter, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 530,
"end": 558,
"text": "Callison-Burch et al., 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We find that correlations between evaluation metrics and human judgments are weaker for dependency parsing than other NLP tasks-our correlation coefficients are typically between .35 and .55-and that inter-annotator agreement is sometimes higher than human-metric agreement. Moreover, our analysis ( \u00a75) reveals that humans have a preference for attachment over labeling decisions, and that attachments closer to the root are more important. Our findings suggest that the currently employed metrics are not fully adequate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions We present i) a systematic comparison between a range of available dependency parsing metrics and their correlation with human judgments; and ii) a novel dataset 1 of 984 sentences (up to 200 sentences for each of the 5 languages) annotated with human judgments for the preferred automatically parsed dependency tree, enabling further research in this direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate seven dependency parsing metrics, described in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2"
},
{
"text": "Given a labeled gold tree G = V, E G , l G (\u2022) and a labeled predicted tree P = V, E P , l P (\u2022) , let E \u2282 V \u00d7 V be the set of directed edges from dependents to heads, and let l : V \u00d7 V \u2192 L be the edge labeling function, with L the set of dependency labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2"
},
{
"text": "The three most commonly used metrics are those from the CoNLL 2006-7 shared tasks (Buchholz and Marsi, 2006) : unlabeled attachment score (UAS), label accuracy (LA), both introduced by Eisner (1996) , and labeled attachment score (LAS), the pivotal dependency parsing metric introduced by Nivre et al. (2004) .",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 185,
"end": 198,
"text": "Eisner (1996)",
"ref_id": "BIBREF5"
},
{
"start": 289,
"end": 308,
"text": "Nivre et al. (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2"
},
{
"text": "|{e | e \u2208 EG \u2229 EP }| |V | LAS = |{e | lG(e) = lP (e), e \u2208 EG \u2229 EP }|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
{
"text": "|V | LA = |{v | v \u2208 V, lG(v, \u2022) = lP (v, \u2022)}| |V |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
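{
"text": "A minimal illustrative sketch of how UAS, LAS, and LA can be computed from the definitions above (hypothetical Python, not from the original paper; it assumes each tree is given as a list of (head, label) pairs, one per token):\n\ndef attachment_scores(gold, pred):\n    # gold, pred: lists of (head_index, label) pairs, one entry per token\n    assert len(gold) == len(pred)\n    n = float(len(gold))\n    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n\n    las = sum(g == p for g, p in zip(gold, pred)) / n\n    la = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n\n    return uas, las, la",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2"
},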
{
"text": "We include two further metrics-namely, labeled (LCP) and unlabeled (UCP) complete predications-to give account for the relevance of correct predicate prediction for parsing quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
{
"text": "LCP is inspired by the complete predicates metric from the SemEval 2015 shared task on semantic parsing (Oepen et al., 2015 ). 2 LCP is triggered by a verb (i.e., set of nodes V verb ) and checks whether all its core arguments match, i.e., all outgoing dependency edges except for punctuation. Since LCP is a very strict metric, we also evaluate UCP, its unlabeled variant. Given a function c X (v) that retrieves the set of child nodes of a node v from a tree X, we first define UCP as follows, and then incorporate the label matching for LCP:",
"cite_spans": [
{
"start": 104,
"end": 123,
"text": "(Oepen et al., 2015",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
{
"text": "UCP = |{v | V verb , cG(v) = cP (v)}| |V verb | LCP = |{v | V verb , cG(v) = cP (v) \u2227 lG(v, \u2022) = lP (v, \u2022)}| |V verb |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
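{
"text": "A corresponding sketch for UCP and LCP (hypothetical Python, not from the original paper; verb_indices and the children helper are assumptions, and punctuation dependents are assumed to be removed beforehand):\n\ndef complete_predications(gold, pred, verb_indices):\n    # children(tree, v): set of (dependent, label) pairs whose head is v\n    def children(tree, v):\n        return {(d, lab) for d, (h, lab) in enumerate(tree) if h == v}\n    ucp_hits = lcp_hits = 0\n    for v in verb_indices:\n        g, p = children(gold, v), children(pred, v)\n        if {d for d, _ in g} == {d for d, _ in p}:\n            ucp_hits += 1  # unlabeled: same set of dependents\n            if g == p:\n                lcp_hits += 1  # labeled: dependents and labels both match\n    n = float(len(verb_indices))\n    return ucp_hits / n, lcp_hits / n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2"
},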
{
"text": "For the final figure of seven different parsing metrics, on top of the previous five, in our experiments we also include the neutral edge direction metric (NED) (Schwartz et al., 2011) , and tree edit distance (TED) (Tsarfaty et al., 2011; Tsarfaty et al., 2012) . 3",
"cite_spans": [
{
"start": 161,
"end": 184,
"text": "(Schwartz et al., 2011)",
"ref_id": "BIBREF18"
},
{
"start": 216,
"end": 239,
"text": "(Tsarfaty et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 240,
"end": 262,
"text": "Tsarfaty et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UAS =",
"sec_num": null
},
{
"text": "In our analysis, we compare the metrics with human judgments. We examine how well the automatic metrics correlate with each other, as well as with human judgments, and whether interannotator agreement exceeds annotator-metric agreement. Table 1 : Data characteristics and agreement statistics. TD: tree depth; SL: sentence length.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},
{
"text": "Data In our experiments we use data from five languages: The English (en), German (de) and Spanish (es) treebanks from the Universal Dependencies (UD v1.0) project (Nivre et al., 2015) , the Copenhagen Dependency Treebank (da) (Buch-Kromann, 2003) , and the Croatian Dependency Treebank (hr) (Agi\u0107 and Merkler, 2013) . We keep the original POS tags for all datasets (17 tags in case of UD, 13 tags for Croatian, and 23 for Danish). Data characteristics are in Table 1 . For the parsing systems, we follow McDonald and Nivre (2007) and use the second order MST (McDonald et al., 2005) , as well as Malt parser with pseudo-projectivization (Nivre and Nilsson, 2005) and default parameters. For each language, we train the parsers on the canonical training section. We randomly select 200 sentences from the test sections, where our two de- pendency parsers do not agree on the correct analysis, after removing punctuation. 4 We do not control for predicted trees matching the gold standard.",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Nivre et al., 2015)",
"ref_id": null
},
{
"start": 227,
"end": 247,
"text": "(Buch-Kromann, 2003)",
"ref_id": "BIBREF2"
},
{
"start": 292,
"end": 316,
"text": "(Agi\u0107 and Merkler, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 518,
"end": 530,
"text": "Nivre (2007)",
"ref_id": "BIBREF10"
},
{
"start": 556,
"end": 583,
"text": "MST (McDonald et al., 2005)",
"ref_id": null
},
{
"start": 638,
"end": 663,
"text": "(Nivre and Nilsson, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 921,
"end": 922,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 460,
"end": 467,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},
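{
"text": "The selection of disagreement sentences can be pictured with a short sketch (hypothetical Python, not from the original paper; the input format is an assumption):\n\nimport random\n\ndef select_disagreements(malt_parses, mst_parses, k=200, seed=0):\n    # malt_parses, mst_parses: dicts mapping sentence id to a punctuation-free\n    # list of (head, label) pairs\n    ids = [i for i in malt_parses if malt_parses[i] != mst_parses[i]]\n    random.seed(seed)\n    return random.sample(ids, min(k, len(ids)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},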
{
"text": "Annotation task A total of 7 annotators were involved in the annotation task. All the annotators are either native or fluent speakers, and wellversed in dependency syntax analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},
{
"text": "For each language, we present the selected 200 sentences with their two predicted dependency structures to 2-4 annotators and ask them to rank which of the two parses is better. They see graphical representations of the two dependency structures, visualized with the What's Wrong With My NLP? tool. 5 The annotators were not informed of what parser produced which tree, nor had they access to the gold standard. The dataset of 984 sentences is available at: https://bitbucket.org/lowlands/ release (folder CoNLL2015).",
"cite_spans": [
{
"start": 299,
"end": 300,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},
{
"text": "First, we perform a standard evaluation in order to see how the parsers fare, using our range of dependency evaluation measures. In addition, we compute correlations between metrics to assess their similarity. Finally, we correlate the measures with human judgements, and compare average annotator and human-system agreements. Table 2 presents the parsing performances with respect to the set of metrics. We see that using LAS, Malt performs better on English, while MST performs better on the remaining four languages. Table 4 : Correlations between human judgments and metrics (micro avg). * means significantly different from LAS \u03c1 using Fisher's z-transform. Bold: highest correlation per language.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 520,
"end": 527,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "correlated, e.g., LAS and LA, and UAS and NED, but some exhibit very low correlation coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Next we study correlations with human judgments (Table 4) . In order to aggregate over the annotations, we use an item-response model (Hovy et al., 2013) . The correlations are relatively weak compared to similar findings for other NLP tasks. For instance, ROUGE-1 (Lin, 2004) correlates strongly with perceived summary quality, with a coefficient of 0.99. The same holds for BLEU and human judgments of machine translation quality (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Hovy et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 265,
"end": 276,
"text": "(Lin, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 432,
"end": 455,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 48,
"end": 57,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
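{
"text": "The human-metric correlations reported here can be reproduced along these lines (hypothetical sketch, not from the original paper; it assumes per-sentence metric differences between the two parses and aggregated human preferences, and uses scipy for Spearman's rho):\n\nfrom scipy.stats import spearmanr\n\ndef metric_human_correlation(metric_diffs, human_prefs):\n    # metric_diffs: per sentence, metric(parse A) - metric(parse B)\n    # human_prefs: per sentence, aggregated human preference (+1 for A, -1 for B)\n    rho, pval = spearmanr(metric_diffs, human_prefs)\n    return rho, pval",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},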
{
"text": "We find that, overall, LAS is the metric that correlates best with human judgments. It is closely followed by UAS, which does not differ significantly from LAS, albeit the correlations for UAS are slightly lower on average. NED is in turn highly correlated with UAS. The correlations for the predicate-based measures (LCP, UCP) are the lowest, as they are presumably too strict, and very different to LAS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Motivated by the fact that people prefer the parse that gets the overall structure right ( \u00a75), we experimented with weighting edges proportionally to their log-distance to root. However, the signal was fairly weak; the correlations were only slightly higher for English and Danish: .552 and .338, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Finally, we compare the mean agreement be- Table 5 : Average mean agreement between annotators, and between annotators and metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "tween humans with the mean agreement between humans and standard metrics, cf. Table 5 . For two languages (English and Croatian), humans agree more with each other than with the standard metrics, suggesting that metrics are not fully adequate.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The mean agreement between humans is .728 for English, with slightly lower scores for the metrics (LAS: .715, UAS: .705, NED: .660). The difference between mean agreement of annotators and human-metric was higher for Croatian: .80 vs .755. For Danish, German and Spanish, however, average agreement between metrics and human judgments is higher than our inter-annotator agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
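{
"text": "Mean agreement figures of this kind can be computed with a small sketch (hypothetical Python, not from the original paper; judgments are assumed to be encoded as +1/-1 preferences per sentence, and a metric's preference as the sign of its score difference):\n\nfrom itertools import combinations\n\ndef mean_pairwise_agreement(judgments):\n    # judgments: dict mapping annotator to a list of +1/-1 preferences\n    pairs = list(combinations(judgments.values(), 2))\n    rates = [sum(a == b for a, b in zip(x, y)) / float(len(x)) for x, y in pairs]\n    return sum(rates) / len(rates)\n\ndef annotator_metric_agreement(judgments, metric_prefs):\n    # metric_prefs: list of +1/-1 choices induced by a metric (e.g., sign of the LAS difference)\n    rates = [sum(a == m for a, m in zip(ann, metric_prefs)) / float(len(ann)) for ann in judgments.values()]\n    return sum(rates) / len(rates)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},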
{
"text": "In sum, our experiments show that metrics correlate relatively weakly with human judgments, suggesting that some errors are more important to humans than others, and that the relevance of these errors are not captured by the metrics. To better understand this, we first consider the POS-wise correlations between human judgments and LAS, cf. Table 6 . In English, for example, the correlation between judgments and LAS is significantly stronger for content words 6 (\u03c1 c = 0.522) than for function words (\u03c1 f = 0.175). This also holds for the other UD languages, namely German (\u03c1 c = 0.423 vs \u03c1 f = 0.263) and Spanish (\u03c1 c = 0.403 vs \u03c1 f = 0.228). This is not the case for the non-UD languages, Croatian and Danish, where the difference between content-POS and function-POS correlations is not significantly different. In Danish, function words head nouns, and are thus more important than in UD, where content-content word relations are annotated, and function words are leaves in the dependency tree. This difference in dependency formalism is shown by the higher correlation for \u03c1 f for Danish.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "The greater correlation for content words for English, German and Spanish suggests that errors Table 6 : Correlations between human judgements and POS-wise LAS (content \u03c1 c vs function \u03c1 f poswise LAS correlations).",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "in attaching or labeling content words mean more to human judges than errors in attaching or labeling function words. We also observe that longer sentences do not compromise annotation quality, with a \u03c1 between \u22120.07 and 0.08 across languages regarding sentence length and agreement. For the languages for which we had 4 annotators, we analyzed the subset of trees where humans and system (by LAS) disagreed, but where there was majority vote for one tree. We obtained 35 dependency instances for English and 27 for Spanish (cf. Table 7 ). Two of the authors determined whether humans preferred labeling over attachment, or otherwise. attachment labeling items en 86% 14% 35 es 67% 33% 27 Table 7 : Preference of attachment or labeling for items where humans and system disagreed and human agreement \u2265 0.75. Table 7 shows that there is a prevalent preference for attachment over labeling for both languages. For Spanish, there is proportionally higher label preference.",
"cite_spans": [],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 7",
"ref_id": null
},
{
"start": 689,
"end": 696,
"text": "Table 7",
"ref_id": null
},
{
"start": 808,
"end": 815,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Out of the attachment preferences, 36% and 28% were related to root/main predicate attachments, for English and Spanish respectively. The relevance of the rootattachment preference indicates that attachment is more important than labeling for our annotators. Figure 5 provides three examples from the data where human and system disagree. Parse i) involves a coordination as well as a (local) adverbial, where humans voted for correct coordination (red) and thus unanimously preferred attachment over labeling. Yet, LAS was higher for the analysis in blue because \"certainly\" is attached to \"Europeans\" in the gold standard. Parse ii) is another example where humans preferred attachment (in this case root attachment), while iii) shows a Spanish example (\"waiter is needed\") where the subject label (nsubj) of \"camarero\" (\"waiter\") was the decisive trait.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Parsing metrics are sensitive to the choice of annotation scheme (Schwartz et al., 2012; Tsarfaty et al., 2011) and fail to capture how parsers fare on important linguistic constructions (Nivre et al., 2010) . In other NLP tasks, several studies have examined how metrics correlate with human judgments, including machine translation, summarization and natural language generation (Papineni et al., 2002; Lin, 2004; Belz and Reiter, 2006; Callison-Burch et al., 2007) . Our study is the first to assess the correlation of human judgments and dependency parsing metrics. While previous studies reached correlation coefficients over 0.80, this is not the case for dependency parsing, where we observe much lower coefficients.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Schwartz et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 89,
"end": 111,
"text": "Tsarfaty et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 187,
"end": 207,
"text": "(Nivre et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 381,
"end": 404,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF17"
},
{
"start": 405,
"end": 415,
"text": "Lin, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 416,
"end": 438,
"text": "Belz and Reiter, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 439,
"end": 467,
"text": "Callison-Burch et al., 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We have shown that out of seven metrics, LAS correlates best with human jugdments. Nevertheless, our study shows that there is an amount of human preference that is not captured with LAS. Our analysis on human versus system disagreement indicates that attachment is more important than labeling, and that humans prefer a parse that gets the overall structure right. For some languages, inter-annotator agreement is higher than annotator-metric (LAS) agreement, and content-POS is more important than function-POS, indicating there is an amount of human preference that is not captured with our current metrics. These observations raise the important question on how to incorporate our observations into parsing metrics that provide a better fit to human judgments. We do not propose a better metric here, but simply show that while LAS seems to be the most adequate metric, there is still a need for better metrics to complement downstream evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We outline a number of extensions for future research. Among those, we would aim at augmenting the annotations by obtaining more detailed judgments from human annotators. The current evaluation would ideally encompass more (diverse) domains and languages, as well as the many diverse annotation schemes implemented in various publicly available dependency treebanks that were not included in our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The dataset is publicly available at https:// bitbucket.org/lowlands/release 2 http://alt.qcri.org/semeval2015/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.tsarfaty.com/unipar/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For Spanish, we had fewer analyses where the two parsers disagreed, i.e., 184.5 https://code.google.com/p/whatswrong/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Muntsa Padr\u00f3 and Miguel Ballesteros for their help and the three anonymous reviewers for their valuable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Three Syntactic Formalisms for Data-Driven Dependency Parsing of Croatian",
"authors": [
{
"first": "Zeljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Danijela",
"middle": [],
"last": "Merkler",
"suffix": ""
}
],
"year": 2013,
"venue": "Text, Speech, and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeljko Agi\u0107 and Danijela Merkler. 2013. Three Syntactic Formalisms for Data-Driven Dependency Parsing of Croatian. In Text, Speech, and Dialogue. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Comparing Automatic and Human Evaluation of NLG Systems",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2006,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz and Ehud Reiter. 2006. Comparing Auto- matic and Human Evaluation of NLG Systems. In EACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Danish Dependency Treebank and the DTAG Treebank Tool",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Buch-Kromann",
"suffix": ""
}
],
"year": 2003,
"venue": "TLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Buch-Kromann. 2003. The Danish Depen- dency Treebank and the DTAG Treebank Tool. In TLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CoNLL-X Shared Task on Multilingual Dependency Parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X Shared Task on Multilingual Dependency Parsing. In CoNLL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "meta-) evaluation of machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Fordyce",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Three new probabilistic models for dependency parsing",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. Three new probabilistic models for dependency parsing. In COLING.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Down-stream effects of tree-to-dependency conversions",
"authors": [
{
"first": "Jakob",
"middle": [],
"last": "Elming",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Sigrid",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Lapponi",
"suffix": ""
},
{
"first": "H\u00e9ctor",
"middle": ["Mart\u00ednez"],
"last": "Alonso",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakob Elming, Anders Johannsen, Sigrid Klerke, Emanuele Lapponi, H\u00e9ctor Mart\u00ednez Alonso, and Anders S\u00f8gaard. 2013. Down-stream effects of tree-to-dependency conversions. In NAACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning whom to trust with MACE",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In NAACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ROUGE: a package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out: Proceedings of the ACL-04 workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: a package for auto- matic evaluation of summaries. In Text summariza- tion branches out: Proceedings of the ACL-04 work- shop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Part-of-speech tagging from 97% to 100%: is it time for some linguistics?",
"authors": [
{
"first": "Christopher",
"middle": ["D"],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning. 2011. Part-of-speech tag- ging from 97% to 100%: is it time for some linguis- tics? In Computational Linguistics and Intelligent Text Processing. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Characterizing the errors of data-driven dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Joakim Nivre. 2007. Character- izing the errors of data-driven dependency parsers. In EMNLP-CoNLL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pseudoprojective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- projective dependency parsing. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Memory-based dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In CoNLL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluation of dependency parsers on unbounded dependencies",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Gomez-Rodriguez",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Car- los Gomez-Rodriguez. 2010. Evaluation of depen- dency parsers on unbounded dependencies. In COL- ING.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semeval 2015 task 18: Broad-coverage semantic dependency parsing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Silvie",
"middle": [],
"last": "Cinkova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency pars- ing. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukus",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL, Philadelphia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukus, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, Philadel- phia, Pennsylvania.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rappoport. 2011. Neutralizing linguistically prob- lematic annotations in unsupervised dependency parsing evaluation. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learnability-based syntactic annotation design",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Omri Abend, and Ari Rappoport. 2012. Learnability-based syntactic annotation design. In COLING.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evaluating dependency parsing: robust and heuristics-free cross-annotation evaluation",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Andersson",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reut Tsarfaty, Joakim Nivre, and Evelina Anders- son. 2011. Evaluating dependency parsing: robust and heuristics-free cross-annotation evaluation. In EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cross-framework evaluation for statistical parsing",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Andersson",
"suffix": ""
}
],
"year": 2012,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2012. Cross-framework evaluation for statistical parsing. In EACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Examples where human and system (LAS) disagree. Human choice: i) red; ii) red; iii) blue.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "82.31 87.88 84.34 85.20 41.27 47.17 MST 78.30 82.91 86.80 84.72 83.49 36.05 45.58 es Malt 78.72 82.85 87.34 82.90 84.20 34.00 43.00 MST 79.51 84.97 86.95 85.00 83.16 31.83 44.00 da Malt 79.28 83.40 85.92 83.39 77.50 47.69 55.23 MST 82.75 87.00 88.42 87.01 78.39 52.31 62.31 de Malt 69.09 75.70 82.05 75.54 80.37 19.72 30.45 MST 72.07 80.29 82.22 80.13 78.94 19.38 33.22 hr Malt 63.21 72.34 76.66 71.94 71.64 23.18 31.03 MST 65.98 76.20 79.01 75.89 72.82 24.71 34.29 Avg Malt 73.84 79.32 83.97 76.62 79.78 33.17 43.18 MST 75.72 82.27 84.68 82.55 79.36 32.86 44.08",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">LANG PARSER LAS UAS</td><td>LA</td><td>NED TED LCP UCP</td></tr><tr><td>en</td><td>Malt</td><td>79.17</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"text": "Parsing performance of Malt and MST.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>presents Spearman's \u03c1 between metrics</td></tr><tr><td>across the 5 languages. Some metrics are strongly</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"text": "Correlations between metrics.",
"type_str": "table",
"content": "<table><tr><td>\u03c1</td><td>en</td><td>es</td><td>da</td><td>de</td><td>hr</td><td>All</td></tr><tr><td colspan=\"2\">LAS .547</td><td>.478</td><td colspan=\"2\">.297 .466</td><td>.540</td><td>.457</td></tr><tr><td colspan=\"2\">UAS .541</td><td>.437</td><td colspan=\"2\">.331 .453</td><td>.397</td><td>.425</td></tr><tr><td>LA</td><td colspan=\"4\">.387* .250* .232 .310</td><td>.467</td><td>.324*</td></tr><tr><td colspan=\"2\">NED .541</td><td>.469</td><td colspan=\"2\">.318 .501</td><td>.446</td><td>.448</td></tr><tr><td colspan=\"3\">TED .372* .404</td><td colspan=\"2\">.323 .331</td><td colspan=\"2\">.405* .361*</td></tr><tr><td>LCP</td><td colspan=\"6\">.022* .230* .171 .120* .120* .126*</td></tr><tr><td colspan=\"7\">UCP .249* .195* .223 .190* .143* .195*</td></tr></table>",
"num": null,
"html": null
},
"TABREF5": {
"text": "ANN LAS UAS LA NED TED LCP UCP da .768 .838 .848 .808 .828 .828 .745 .765 de .670 .710 .690 .635 .710 .630 .575 .565 en .728 .715 .705 .660 .700 .658 .525 .600 es .601 .663 .644 .603 .652 .635 .581 .554 hr .800 .755 .700 .735 .730 .705 .570 .580",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF6": {
"text": "Tagged as ADJ, NOUN, PROPN, VERB.",
"type_str": "table",
"content": "<table><tr><td>\u03c1</td><td colspan=\"2\">content function</td></tr><tr><td colspan=\"2\">en .522</td><td>.175</td></tr><tr><td colspan=\"2\">de .423</td><td>.263</td></tr><tr><td colspan=\"2\">es .403</td><td>.228</td></tr><tr><td colspan=\"2\">da .148</td><td>.173</td></tr><tr><td colspan=\"2\">hr .340</td><td>.306</td></tr></table>",
"num": null,
"html": null
}
}
}
}