{
"paper_id": "D14-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:53:30.673870Z"
},
"title": "Testing for Significance of Increased Correlation with Human Judgment",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic metrics are widely used in machine translation as a substitute for human assessment. With the introduction of any new metric comes the question of just how well that metric mimics human assessment of translation quality. This is often measured by correlation with human judgment. Significance tests are generally not used to establish whether improvements over existing methods such as BLEU are statistically significant or have occurred simply by chance, however. In this paper, we introduce a significance test for comparing correlations of two metrics, along with an open-source implementation of the test. When applied to a range of metrics across seven language pairs, tests show that for a high proportion of metrics, there is insufficient evidence to conclude significant improvement over BLEU.",
"pdf_parse": {
"paper_id": "D14-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic metrics are widely used in machine translation as a substitute for human assessment. With the introduction of any new metric comes the question of just how well that metric mimics human assessment of translation quality. This is often measured by correlation with human judgment. Significance tests are generally not used to establish whether improvements over existing methods such as BLEU are statistically significant or have occurred simply by chance, however. In this paper, we introduce a significance test for comparing correlations of two metrics, along with an open-source implementation of the test. When applied to a range of metrics across seven language pairs, tests show that for a high proportion of metrics, there is insufficient evidence to conclude significant improvement over BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Within machine translation (MT), efforts are ongoing to improve evaluation metrics and find better ways to automatically assess translation quality. The process of validating a new metric involves demonstration that it correlates better with human judgment than a standard metric such as BLEU (Papineni et al., 2001 ). However, although it is standard practice in MT evaluation to measure increases in automatic metric scores with significance tests (Germann, 2003; Och, 2003; Kumar and Byrne, 2004; Koehn, 2004; Riezler and Maxwell, 2005; Graham et al., 2014) , this has not been the case in papers proposing new metrics. Thus it is possible that some reported improvements in correlation with human judgment are attributable to chance rather than a systematic improvement.",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF12"
},
{
"start": 450,
"end": 465,
"text": "(Germann, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 466,
"end": 476,
"text": "Och, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 477,
"end": 499,
"text": "Kumar and Byrne, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 500,
"end": 512,
"text": "Koehn, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 513,
"end": 539,
"text": "Riezler and Maxwell, 2005;",
"ref_id": "BIBREF14"
},
{
"start": 540,
"end": 560,
"text": "Graham et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we motivate and introduce a novel significance test to assess the statistical significance of differences in correlation with human judgment for pairs of automatic metrics. We apply tests to the WMT-12 shared metrics task to compare each of the participating methods, and find that for a high proportion of metrics, there is not enough evidence to conclude that they significantly outperform BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common means of assessing automatic MT evaluation metrics is Spearman's rank correlation with human judgments (Melamed et al., 2003) , which measures the relative degree of monotonicity between the metric and human scores in the range [\u22121, 1]. The standard justification for calculating correlations over ranks rather than raw scores is to: (a) reduce anomalies due to absolute score differences; and (b) focus evaluation on what is generally the primary area of interest, namely the ranking of systems/translations.",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Melamed et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgment",
"sec_num": "2"
},
{
"text": "An alternative means of evaluation is Pearson's correlation, which measures the linear correlation between a metric and human scores (Leusch et al., 2003) . Debate on the relative merits of Spearman's and Pearson's correlation for the evaluation of automatic metrics is ongoing, but there is an increasing trend towards Pearson's correlation, e.g. in the recent WMT-14 shared metrics task. Figure 1 presents the system-level results for two evaluation metrics -AMBER (Chen et al., 2012) and TERRORCAT (Fishel et al., 2012) -over the WMT-12 Spanish-to-English metrics task. These two metrics achieved the joint-highest rank correlation (\u03c1 = 0.965) for the task, but differ greatly in terms of Pearson's correlation (r = 0.881 vs. 0.971, resp.). The largest contributor to this artifact is the system with the lowest human score, represented by the leftmost point in both plots. Consistent with the WMT-14 metrics shared task, we argue that Pearson's correlation is more sensitive than Spearman's correlation. There is still the question, however, of whether an observed difference in Pearson's r is statistically significant, which we address in the next section.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Leusch et al., 2003)",
"ref_id": "BIBREF8"
},
{
"start": 467,
"end": 486,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 501,
"end": 522,
"text": "(Fishel et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 390,
"end": 398,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Correlation with Human Judgment",
"sec_num": "2"
},
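{
"text": "As a concrete illustration of how the two coefficients can disagree, the following minimal Python sketch (toy scores chosen for illustration, not the WMT-12 data) shows a metric whose misplaced score for the worst-ranked system leaves Spearman's rho at 1.0 while noticeably lowering Pearson's r:\nimport numpy as np\nfrom scipy.stats import pearsonr, spearmanr\n\n# Hypothetical system-level scores: both metrics rank the five systems\n# exactly as the human judgments do.\nhuman = np.array([0.10, 0.55, 0.60, 0.65, 0.70])\nmetric_a = np.array([0.12, 0.54, 0.61, 0.66, 0.69])  # tracks human scores closely\nmetric_b = np.array([0.50, 0.55, 0.60, 0.65, 0.70])  # scores the worst system far too high\n\nfor name, scores in ((\"A\", metric_a), (\"B\", metric_b)):\n    r, _ = pearsonr(human, scores)\n    rho, _ = spearmanr(human, scores)\n    print(name, round(r, 3), round(rho, 3))\n# Both metrics achieve rho = 1.0, but metric B's Pearson's r drops well\n# below metric A's, mirroring the AMBER vs. TERRORCAT contrast above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgment",
"sec_num": "2"
},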
{
"text": "Evaluation of a new automatic metric, M new , commonly takes the form of quantifying the correlation between the new metric and human judgment, r(M new , H), and contrasting it with the correlation for some baseline metric, r(M base , H). It is very rare in the MT literature for significance testing to be performed in such cases, however. We introduce a statistical test which can be used for this purpose, and apply the test to the evaluation of metrics participating in the WMT-12 metric evaluation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Testing",
"sec_num": "3"
},
{
"text": "At first gloss, it might seem reasonable to perform significance testing in the following manner when an increase in correlation with human assessment is observed: apply a significance test separately to the correlation of each metric with human judgment, with the hope that the newly proposed metric will achieve a significant correlation where the baseline metric does not. However, besides the fact that the correlation between almost any document-level metric and human judgment will generally be significantly greater than zero, the logic here is flawed: the fact that one correlation is significantly higher than zero (r(M new , H)) and that of another is not, does not necessarily mean that the difference between the two correlations is significant. Instead, a specific test should be applied to the difference in correlations on the data. For this same reason, confidence intervals for individual correlations with human judgment are also not particularly meaningful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Testing",
"sec_num": "3"
},
{
"text": "In psychological studies, it is often the case that samples that data are drawn from are independent, and differences in correlations are computed on independent data sets. In such cases, the Fisher r to z transformation is applied to test for significant differences in correlations. In the case of automatic metric evaluation, however, the data sets used are almost never independent. This means that if r(M base , H) and r(M new , H) are both > 0, the correlation between the metric scores themselves, r(M base , M new ), must also be > 0. The strength of this correlation, directly between pairs of metrics, should be taken into account using a significance test of the difference in correlation between r(M base , H) and r(M new , H).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Testing",
"sec_num": "3"
},
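{
"text": "For contrast with the dependent case addressed here, the Fisher r to z test for independent samples can be sketched as follows in Python (illustrative only; the function name is hypothetical), for two correlations r1 and r2 measured on disjoint data sets of sizes n1 and n2:\nimport numpy as np\nfrom scipy.stats import norm\n\ndef fisher_r_to_z(r1, n1, r2, n2):\n    # Two-sided test for a difference between correlations from\n    # independent samples, via the Fisher transformation arctanh(r).\n    z1, z2 = np.arctanh(r1), np.arctanh(r2)\n    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))\n    z = (z1 - z2) / se\n    return z, 2 * norm.sf(abs(z))\nBecause this test ignores r(M_base, M_new), applying it to correlations computed on the same data set would discard the statistical power that the dependence provides, which motivates the test introduced in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Testing",
"sec_num": "3"
},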
{
"text": "Correlations computed for two separate automatic metrics on the same data set are not independent, and for this reason in order to test the difference in correlation between them, the degree to which the pair of metrics correlate with each other should be taken into account. 1959) 1 evaluates significance in a difference in dependent correlations (Steiger, 1980) . It is formulated as follows, as a test of whether the population correlation between X 1 and X 3 equals the population correlation between X 2 and X 3 :",
"cite_spans": [
{
"start": 349,
"end": 364,
"text": "(Steiger, 1980)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},
{
"text": "t(n \u2212 3) = (r 13 \u2212 r 23 ) (n \u2212 1)(1 + r 12 ) 2K (n\u22121) (n\u22123) + (r 23 +r 13 ) 2 4 (1 \u2212 r 12 ) 3 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},
{
"text": "where r ij is the Pearson correlation between X i and X j , n is the size of the population, and:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},
{
"text": "K = 1 \u2212 r 12 2 \u2212 r 13 2 \u2212 r 23 2 + 2r 12 r 13 r 23",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},
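{
"text": "The two formulas above translate directly into code. The following is a minimal Python sketch of the one-sided Williams test (an illustrative re-implementation, not the authors' released tool; the function name and example figures are hypothetical), where r12 is the Pearson correlation between the two metric score vectors and r13 and r23 are their respective correlations with human judgment:\nimport numpy as np\nfrom scipy.stats import t as t_dist\n\ndef williams_test(r13, r23, r12, n):\n    # One-sided Williams test of whether the population correlation\n    # between X_1 and X_3 exceeds that between X_2 and X_3, with all\n    # three variables measured on the same n observations.\n    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23\n    denom = 2 * K * (n - 1) / (n - 3) + ((r23 + r13) ** 2 / 4) * (1 - r12) ** 3\n    t_stat = (r13 - r23) * np.sqrt((n - 1) * (1 + r12) / denom)\n    return t_stat, t_dist.sf(t_stat, df=n - 3)  # one-sided p-value\n\n# Hypothetical example: two metrics scored over the same n = 12 systems.\nt_stat, p = williams_test(r13=0.97, r23=0.88, r12=0.90, n=12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},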
{
"text": "The Williams test is more powerful than the equivalent for independent samples (Fisher r to z), as it takes the correlations between X 1 and X 2 (metric scores) into account. All else being equal, the higher the correlation between the metric scores, the greater the statistical power of the test. Figure 2a is a heatmap of the degree to which automatic metrics correlate with one another when computed on the same data set, in the form of the Pearson's correlation between each pair of metrics that participated in the WMT-12 metrics task for Spanish-to-English evaluation. Metrics are ordered in all tables from highest to lowest correlation with human assessment. In addition, for the purposes of significance testing, we take the absolute value of all correlations, in order to compare error-based metrics with non-error based ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 307,
"text": "Figure 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Correlated Correlations",
"sec_num": "3.1"
},
{
"text": "In general, the correlation is high amongst all pairs of metrics, with a high proportion of paired metrics achieving a correlation in excess of r = 0.9. Two exceptions to this are TERRORCAT (Fishel et al., 2012) and SAGAN (Castillo and Estrella, 2012) , as seen in the regions of yellow and white. Figure 2b shows the results of Williams significance tests for all pairs of metrics. Since we are interested in not only identifying significant differences in correlations, but ultimately ranking competing metrics, we use a one-sided test. Here again, the metrics are ordered from highest to lowest (absolute) correlation with human judgment.",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Fishel et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 222,
"end": 251,
"text": "(Castillo and Estrella, 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 298,
"end": 307,
"text": "Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "For the Spanish-to-English systems, approximately 60% of WMT-12 metric pairs show a significant difference in correlation with human judgment at p < 0.05 (for one of the two metric directions). 2 As expected, the higher the correlation with human judgment, the more metrics a given method is superior to at a level of statistical significance. Although TERRORCAT (Fishel et al., 2012) achieves the highest absolute correlation with human judgment, it is not significantly better (p \u2265 0.05) than the four next-best metrics (METEOR (Denkowski and Lavie, 2011) , SAGAN (Castillo and Estrella, 2012) jar, 2011) and POSF (Popovic, 2012) ). There is not enough evidence to conclude, therefore, that this metric is any better at evaluating Spanish-to-English MT system quality than the next four metrics. Figure 3 shows the results of significance tests for the six other language pairs used in the WMT-12 metrics shared task. 3 For no language pair is there an outright winner amongst the metrics, with proportions of significant differences between metrics for a given language pair ranging from 3% for Czech-to-English to 82% for Englishto-French (p < 0.05). The number of metrics that significantly outperform BLEU for a given language pair is only 34% (p < 0.05), and no method significantly outperforms BLEU over all language pairs -indeed, even the best methods achieve statistical significance over BLEU for only a small minority of language pairs. This underlines the dangers of assessing metrics based solely on correlation numbers, and emphasizes the importance of statistical testing.",
"cite_spans": [
{
"start": 363,
"end": 384,
"text": "(Fishel et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 530,
"end": 557,
"text": "(Denkowski and Lavie, 2011)",
"ref_id": "BIBREF2"
},
{
"start": 566,
"end": 595,
"text": "(Castillo and Estrella, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 616,
"end": 631,
"text": "(Popovic, 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 798,
"end": 806,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "It is important to note that the number of com-peting metrics a metric significantly outperforms should not be used as the criterion for ranking competing metrics. This is due to the fact that the power of the Williams test to identify significant differences between correlations changes depending on the degree to which the pair of metrics correlate with each other. Therefore, a metric that happens to correlate strongly with many other metrics would be at an unfair advantage, were numbers of significant wins to be used to rank metrics. For this reason, it is best to interpret pairwise metric tests in isolation. As part of this research, we have made available an open-source implementation of statistical tests tailored to the assessment of MT metrics available at https://github.com/ ygraham/significance-williams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "We have provided an analysis of current methodologies for evaluating automatic metrics in machine translation, and identified an issue with respect to the lack of significance testing. We introduced the Williams test as a means of calculating the statistical significance of differences in correlations for dependent samples. Analysis of statistical significance in the WMT-12 metrics shared task showed there is currently insufficient evidence for a high proportion of metrics to conclude that they outperform BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Also sometimes referred to as the Hotelling-Williams test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Correlation matrices (red) are maximally filled, in contrast to one-sided significance test matrices (green), where, at a maximum, fewer than half of the cells can be filled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We omit English-to-Czech due to some metric scores being omitted from the WMT-12 data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We wish to thank the anonymous reviewers for their valuable comments. This research was supported by funding from the Australian Research Council.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic textual similarity for MT evaluation",
"authors": [
{
"first": "Julio",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Estrella",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "52--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julio Castillo and Paula Estrella. 2012. Semantic tex- tual similarity for MT evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Trans- lation, pages 52-58, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving AMBER, an MT evaluation metric",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "59--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boxing Chen, Roland Kuhn, and George Foster. 2012. Improving AMBER, an MT evaluation metric. In Proceedings of the Seventh Workshop on Statisti- cal Machine Translation, pages 59-63, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Pro- ceedings of the Sixth Workshop on Statistical Ma- chine Translation, pages 85-91, Edinburgh, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TerrorCat: a translation error categorization-based MT quality metric",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "64--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Fishel, Rico Sennrich, Maja Popovi\u0107, and Ond\u0159ej Bojar. 2012. TerrorCat: a translation error categorization-based MT quality metric. In Pro- ceedings of the Seventh Workshop on Statistical Ma- chine Translation, pages 64-70, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Greedy decoding for statistical machine translation in almost linear time",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Assoc",
"volume": "1",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Germann. 2003. Greedy decoding for statis- tical machine translation in almost linear time. In Proceedings of the 2003 Conference of the North American Chapter of the Assoc. Computational Lin- guistics on Human Language Technology-Volume 1, pages 1-8, Edmonton, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Randomized significance tests in machine translation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "266--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2014. Randomized significance tests in machine translation. In Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation, pages 266-274, Baltimore, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of Empirical Methods in Natural Language Processing 2004 (EMNLP 2004), pages 388-395, Barcelona, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum Bayes-risk decoding for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 4th International Conference on Human Language Technology Research and 5th Annual Meeting of the NAACL (HLT-NAACL 2004)",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine transla- tion. In Proceedings of the 4th International Con- ference on Human Language Technology Research and 5th Annual Meeting of the NAACL (HLT-NAACL 2004), pages 169-176, Boston, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A novel string-to-string distance measure with applications to machine translation evaluation",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Leusch",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings 9th Machine Translation Summit (MT Summit IX)",
"volume": "",
"issue": "",
"pages": "240--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregor Leusch, Nicola Ueffing, and Hermann Ney. 2003. A novel string-to-string distance measure with applications to machine translation evaluation. In Proceedings 9th Machine Translation Summit (MT Summit IX), pages 240-247, New Orleans, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Approximating a deep-syntactic metric for MT evaluation and tuning",
"authors": [
{
"first": "Matou\u0161",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "92--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matou\u0161 Mach\u00e1\u010dek and Ond\u0159ej Bojar. 2011. Approx- imating a deep-syntactic metric for MT evaluation and tuning. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 92-98, Edin- burgh, UK.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Precision and recall of machine translation",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL 2003) -Short Papers",
"volume": "",
"issue": "",
"pages": "61--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed, Ryan Green, and Joseph Turian. 2003. Precision and recall of machine translation. In Pro- ceedings of the 2003 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics on Human Language Technology (HLT- NAACL 2003) -Short Papers, pages 61-63, Ed- monton, Canada.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate train- ing in statistical machine translation. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sap- poro, Japan.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. BLEU: A method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research, Thomas J. Watson Research Center.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Class error rates for evaluation of machine translation output",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovic",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "71--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovic. 2012. Class error rates for evaluation of machine translation output. In Proceedings of the Seventh Workshop on Statistical Machine Transla- tion, pages 71-75, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On some pitfalls in automatic evaluation and significance testing for mt",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Maxwell",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance test- ing for mt. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 57-64, Ann Arbor, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tests for comparing elements of a correlation matrix",
"authors": [
{
"first": "James",
"middle": [
"H"
],
"last": "Steiger",
"suffix": ""
}
],
"year": 1980,
"venue": "Psychological Bulletin",
"volume": "87",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James H. Steiger. 1980. Tests for comparing ele- ments of a correlation matrix. Psychological Bul- letin, 87(2):245.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Regression Analysis",
"authors": [
{
"first": "Evan",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan J. Williams. 1959. Regression Analysis, vol- ume 14. Wiley, New York, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Scatter plot of human and automatic scores of WMT-12 Spanish-to-English systems for two MT evaluation metrics (AMBER and TERRORCAT)",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "(a) Pearson's correlation between pairs of automatic metrics; and (b) p-value of Williams significance tests, where a colored cell in row i (named on y-axis), col j indicates that metric i (named on x-axis) correlates significantly higher with human judgment than metric j; all results are based on the WMT-12 Spanish-to-English data set.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Significance results for pairs of automatic metrics for each WMT-12 language pair.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "The Williams test(Williams,",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TerrorCat</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TerrorCat</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>METEOR</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>METEOR</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Sagan</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Sagan</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Sempos</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Sempos</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>PosF</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>PosF</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>XEnErrCats</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>XEnErrCats</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>WBErrCats</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>WBErrCats</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Amber</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Amber</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>BErrCats</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>BErrCats</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>SimpBLEU</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>SimpBLEU</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>BLEU\u22124cc</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>BLEU\u22124cc</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TER</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TER</td></tr><tr><td>TerrorCat</td><td>METEOR</td><td>Sagan</td><td>Sempos</td><td>PosF</td><td>XEnErrCats</td><td>WBErrCats</td><td>Amber</td><td>BErrCats</td><td>SimpBLEU</td><td>BLEU.4cc</td><td>TER</td><td>TerrorCat</td><td>METEOR</td><td>Sagan</td><td>Sempos</td><td>PosF</td><td>XEnErrCats</td><td>WBErrCats</td><td>Amber</td><td>BErrCats</td><td>SimpBLEU</td><td>BLEU.4cc</td><td>TER</td></tr><tr><td colspan=\"11\">(a) Pearson's correlation</td><td/><td colspan=\"12\">(b) Statistical significance</td></tr></table>",
"type_str": "table"
}
}
}
}