{
"paper_id": "N10-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:51:11.961112Z"
},
"title": "Extending the METEOR Machine Translation Evaluation Metric to the Phrase Level",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15232",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15232",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents METEOR-NEXT, an extended version of the METEOR metric designed to have high correlation with postediting measures of machine translation quality. We describe changes made to the metric's sentence aligner and scoring scheme as well as a method for tuning the metric's parameters to optimize correlation with humantargeted Translation Edit Rate (HTER). We then show that METEOR-NEXT improves correlation with HTER over baseline metrics, including earlier versions of METEOR, and approaches the correlation level of a state-of-theart metric, TER-plus (TERp).",
"pdf_parse": {
"paper_id": "N10-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents METEOR-NEXT, an extended version of the METEOR metric designed to have high correlation with postediting measures of machine translation quality. We describe changes made to the metric's sentence aligner and scoring scheme as well as a method for tuning the metric's parameters to optimize correlation with humantargeted Translation Edit Rate (HTER). We then show that METEOR-NEXT improves correlation with HTER over baseline metrics, including earlier versions of METEOR, and approaches the correlation level of a state-of-theart metric, TER-plus (TERp).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent focus on the need for accurate automatic metrics for evaluating the quality of machine translation output has spurred much development in the field of MT. Workshops such as WMT09 (Callison-Burch et al., 2009) and the MetricsMATR08 challenge (Przybocki et al., 2008) encourage the development of new MT metrics and reliable human judgment tasks.",
"cite_spans": [
{
"start": 186,
"end": 215,
"text": "(Callison-Burch et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 272,
"text": "(Przybocki et al., 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes our work extending the ME-TEOR metric to improve correlation with humantargeted Translation Edit Rate (HTER) (Snover et al., 2006) , a semi-automatic post-editing based metric which measures the distance between MT output and a targeted reference. We identify several limitations of the original METEOR metric and describe our modifications to improve performance on this task. Our extended metric, METEOR-NEXT, is then tuned to maximize segment-level correlation with HTER scores and tested against several baseline metrics. We show that METEOR-NEXT outperforms earlier versions of METEOR when tuned to the same HTER data and approaches the performance of a state-of-the-art TER-based metric, TER-plus.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a machine translation hypothesis and a reference translation, the traditional METEOR metric calculates a lexical similarity score based on a wordto-word alignment between the two strings (Banerjee and Lavie, 2005) . When multiple references are available, the hypothesis is scored against each and the reference producing the highest score is used. Alignments are built incrementally in a series of stages using the following METEOR matchers: Exact: Words are matched if and only if their surface forms are identical. Stem: Words are stemmed using a languageappropriate Snowball Stemmer (Porter, 2001 ) and matched if the stems are identical. Synonym: Words are matched if they are both members of a synonym set according to the Word-Net (Miller and Fellbaum, 2007) database. This matcher is limited to translations into English.",
"cite_spans": [
{
"start": 207,
"end": 219,
"text": "Lavie, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 593,
"end": 606,
"text": "(Porter, 2001",
"ref_id": "BIBREF8"
},
{
"start": 744,
"end": 771,
"text": "(Miller and Fellbaum, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "At each stage, one of the above matchers identifies all possible word matches between the two translations using words not aligned in previous stages. An alignment is then identified as the largest subset of these matches in which every word in each sentence aligns to zero or one words in the other sen-tence. If multiple such alignments exist, the alignment is chosen that best preserves word order by having the fewest crossing alignment links. At the end of each stage, matched words are fixed so that they are not considered in future stages. The final alignment is defined as the union of all stage alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "Once an alignment has been constructed, the total number of unigram matches (m), the number of words in the hypothesis (t), and the number of words in the reference (r) are used to calculate precision (P = m/t) and recall (R = m/r). The parameterized harmonic mean of P and R (van Rijsbergen, 1979) is then calculated:",
"cite_spans": [
{
"start": 281,
"end": 298,
"text": "Rijsbergen, 1979)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "F mean = P \u2022 R \u03b1 \u2022 P + (1 \u2212 \u03b1) \u2022 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
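{
"text": "As an illustration only (not the released METEOR code), the precision, recall, and F_mean defined above can be computed with a short Python sketch; the function name and argument order are our own:\n\ndef f_mean(m, t, r, alpha):\n    # m: unigram matches, t: hypothesis length, r: reference length\n    if m == 0:\n        return 0.0\n    p = m / t    # precision\n    rec = m / r  # recall\n    # Parameterized harmonic mean (van Rijsbergen, 1979)\n    return (p * rec) / (alpha * p + (1 - alpha) * rec)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},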
{
"text": "To account for differences in word order, the minimum number of \"chunks\" (ch) is calculated where a chunk is defined as a series of matched unigrams that is contiguous and identically ordered in both sentences. The fragmentation (f rag = ch/m) is then used to calculate a fragmentation penalty:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "P en = \u03b3 \u2022 f rag \u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "The final METEOR score is then calculated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "Score = (1 \u2212 P en) \u2022 F mean",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
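{
"text": "Putting the pieces together, a minimal sketch of the traditional sentence-level score (illustrative only; the released metric additionally handles multiple references and parameter configuration):\n\ndef meteor_score(m, t, r, ch, alpha, beta, gamma):\n    # m: matches, t: hypothesis words, r: reference words, ch: chunks\n    if m == 0:\n        return 0.0\n    p, rec = m / t, m / r\n    f_mean = (p * rec) / (alpha * p + (1 - alpha) * rec)\n    frag = ch / m\n    pen = gamma * frag ** beta  # fragmentation penalty\n    return (1 - pen) * f_mean",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},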
{
"text": "The free parameters \u03b1, \u03b2, and \u03b3 can be tuned to maximize correlation with various types of human judgments (Lavie and Agarwal, 2007) .",
"cite_spans": [
{
"start": 107,
"end": 132,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional METEOR Scoring",
"sec_num": "2.1"
},
{
"text": "Traditional METEOR is limited to unigram matches, making it strictly a word-level metric. By focusing on only one match type per stage, the aligner misses a significant part of the possible alignment space. Further, selecting partial alignments based only on the fewest number of per-stage crossing alignment links can in practice lead to missing full alignments with the same number of matches in fewer chunks. Our extended aligner addresses these limitations by introducing support for multiple-word phrase matches and considering all possible matches in a single alignment stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
{
"text": "We introduce an additional paraphrase matcher which matches phrases (one or more successive words) if one phrase is considered a paraphrase of the other by a paraphrase database. For English, we use the paraphrase database developed by Snover et al. (2009) , using techniques presented by Bannard and Callison-Burch (2005) .",
"cite_spans": [
{
"start": 236,
"end": 256,
"text": "Snover et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 289,
"end": 322,
"text": "Bannard and Callison-Burch (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
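{
"text": "Conceptually, the paraphrase matcher enumerates phrase pairs and checks them against the paraphrase database. The sketch below is a simplified illustration under the assumption that the database can be represented as a set of phrase-pair tuples; the actual table format and lookup differ:\n\ndef paraphrase_matches(hyp_words, ref_words, table, max_len=4):\n    # table: set of (phrase_a, phrase_b) tuples of space-joined phrases\n    matches = []\n    for i in range(len(hyp_words)):\n        for li in range(1, max_len + 1):\n            hyp_phrase = ' '.join(hyp_words[i:i + li])\n            for j in range(len(ref_words)):\n                for lj in range(1, max_len + 1):\n                    ref_phrase = ' '.join(ref_words[j:j + lj])\n                    if (hyp_phrase, ref_phrase) in table or (ref_phrase, hyp_phrase) in table:\n                        # record start positions and lengths in both sentences\n                        matches.append((i, j, li, lj))\n    return matches",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},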
{
"text": "The extended aligner first constructs a search space by applying all matchers in sequence to identify all possible matches between the hypothesis and reference. To reduce redundant matches, stem and synonym matches between pairs of words which have already been identified as exact matches are not considered. Matches have start positions and lengths in both sentences; a word occurring less than length positions after a match start is said to be covered by the match. As exact, stem, and synonym matches will always have length one in both sentences, they can be considered phrase matches of length one. Since other matches can cover phrases of different lengths in the two sentences, matches are now said to be one-to-one at the phrase level rather than the word level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
{
"text": "Once all possible matches have been identified, the aligner identifies the final alignment as the largest subset of these matches meeting the following criteria in order of importance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
{
"text": "1. Each word in each sentence is covered by zero or one matches 2. Largest number of covered words across both sentences 3. Smallest number of chunks, where a chunk is now defined as a series of matched phrases that is contiguous and identically ordered in both sentences 4. Smallest sum of absolute distances between match start positions in the two sentences (prefer to align words and phrases that occur at similar positions in both sentences)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
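{
"text": "The criteria above can be read as a lexicographic objective over candidate alignments that already satisfy criterion 1. A simplified sketch (our own illustration; the actual aligner searches the match space incrementally rather than scoring fully enumerated alignments):\n\ndef alignment_key(matches):\n    # matches: list of (hyp_start, ref_start, hyp_len, ref_len) phrase matches,\n    # assumed to be one-to-one at the phrase level (criterion 1)\n    covered = sum(hl + rl for _, _, hl, rl in matches)\n    # Chunks: contiguous runs of matched phrases, identically ordered in both sentences\n    chunks, prev = 0, None\n    for hs, rs, hl, rl in sorted(matches):\n        if prev is None or hs != prev[0] + prev[2] or rs != prev[1] + prev[3]:\n            chunks += 1\n        prev = (hs, rs, hl, rl)\n    distance = sum(abs(hs - rs) for hs, rs, _, _ in matches)\n    # Prefer more coverage, then fewer chunks, then smaller start-position distance,\n    # e.g. best = min(candidates, key=alignment_key)\n    return (-covered, chunks, distance)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},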
{
"text": "The resulting alignment is selected from the full space of possible alignments and directly optimizes the statistics on which the the final score will be calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending the METEOR Aligner",
"sec_num": "2.2"
},
{
"text": "Once an alignment has been chosen, the METEOR-NEXT score is calculated using extended versions of the traditional METEOR statistics. We also introduce a tunable weight vector used to dictate the relative contribution of each match type. The extended ME-TEOR score is calculated as follows. The number of words in the hypothesis (t) and reference (r) are counted. For each of the matchers (m i ), count the number of words covered by matches of this type in the hypothesis (m i (t)) and reference (m i (r)) and apply the appropriate module weight (w i ). The weighted Precision and Recall are then calculated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended METEOR Scoring",
"sec_num": "2.3"
},
{
"text": "P = i w i \u2022 m i (t) t R = i w i \u2022 m i (r) r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended METEOR Scoring",
"sec_num": "2.3"
},
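{
"text": "A sketch of the weighted statistics, assuming per-matcher counts of covered words are already available (the dictionary layout and names here are our own, not part of the metric's interface):\n\ndef weighted_precision_recall(match_counts, weights, t, r):\n    # match_counts: {matcher: (covered words in hypothesis, covered words in reference)}\n    # weights: {matcher: module weight w_i}; exact matches are typically fixed at 1.0\n    p = sum(weights[name] * hyp for name, (hyp, _) in match_counts.items()) / t\n    rec = sum(weights[name] * ref for name, (_, ref) in match_counts.items()) / r\n    return p, rec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended METEOR Scoring",
"sec_num": "2.3"
},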
{
"text": "The minimum number of chunks (ch) is then calculated using the new chunk definition. Once P , R, and ch are calculated, the remaining statistics and final score can be calculated as in Section 2.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended METEOR Scoring",
"sec_num": "2.3"
},
{
"text": "Human-targeted Translation Edit Rate (HTER) (Snover et al., 2006) , is a semi-automatic assessment of machine translation quality based on the number of edits required to correct translation hypotheses. A human annotator edits each MT hypothesis so that it is meaning-equivalent with a reference translation, with an emphasis on making the minimum possible number of edits. The Translation Edit Rate (TER) is then calculated using the human-edited translation as a targeted reference for the MT hypothesis. The resulting scores are shown to correlate well with other types of human judgments (Snover et al., 2006) .",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 592,
"end": 613,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning for Post-Editing Measures of Quality",
"sec_num": "3"
},
{
"text": "The GALE (Olive, 2005) Phase 2 unsequestered data includes HTER scores for multiple Arabic-to-English and Chinese-to-English MT systems. We used HTER scores for 10838 segments from 1045 documents from this data set to tune both the original METEOR and METEOR-NEXT. Both were exhaustively tuned to maximize the length-weighted segment-level Pearson's correlation with the HTER scores. This produced globally optimal \u03b1, \u03b2, and \u03b3 values for METEOR and optimal \u03b1, \u03b2, \u03b3 values plus stem, synonym, and paraphrase match weights for METEOR-NEXT (with the weight of exact matches fixed at 1). Table 1 compares the new HTER parameters to those tuned for other tasks including adequacy and fluency (Lavie and Agarwal, 2007) and ranking (Agarwal and Lavie, 2008) . As observed by Snover et al. (2009) , HTER prefers metrics which are more balanced between precision and recall: this results in the lowest values of \u03b1 for any task. Additionally, non-exact matches receive lower weights, with stem matches receiving zero weight. This reflects a weakness in HTER scoring where words with matching stems are treated as completely dissimilar, requiring full word substitutions (Snover et al., 2006) .",
"cite_spans": [
{
"start": 9,
"end": 22,
"text": "(Olive, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 687,
"end": 712,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 725,
"end": 750,
"text": "(Agarwal and Lavie, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 768,
"end": 788,
"text": "Snover et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 1160,
"end": 1181,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 584,
"end": 591,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning Toward HTER",
"sec_num": "3.1"
},
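{
"text": "One plausible reading of length-weighted segment-level Pearson's correlation is a weighted Pearson coefficient in which each segment is weighted by its length; the exact weighting used for tuning is not spelled out here, so treat the following sketch as an assumption:\n\ndef weighted_pearson(xs, ys, weights):\n    # xs: metric scores, ys: HTER scores, weights: e.g. segment lengths\n    w = sum(weights)\n    mx = sum(wi * x for wi, x in zip(weights, xs)) / w\n    my = sum(wi * y for wi, y in zip(weights, ys)) / w\n    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(weights, xs, ys))\n    vx = sum(wi * (x - mx) ** 2 for wi, x in zip(weights, xs))\n    vy = sum(wi * (y - my) ** 2 for wi, y in zip(weights, ys))\n    return cov / (vx * vy) ** 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Toward HTER",
"sec_num": "3.1"
},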
{
"text": "The GALE (Olive, 2005) Phase 3 unsequestered data includes HTER scores for Arabic-to-English MT output. We created a test set from HTER scores of 2245 segments from 195 documents in this data set. Our evaluation metric (METEOR-NEXT-hter) was tested against the following established metrics: BLEU (Papineni et al., 2002) with a maximum Ngram length of 4, TER (Snover et al., 2006) , versions of METEOR based on release 0.7 tuned for adequacy and fluency (METEOR-0.7-af) (Lavie and Agarwal, 2007) , ranking (METEOR-0.7-rank) (Agarwal and Lavie, 2008) , and HTER (METEOR-0.7-hter). Also included is the HTER-tuned version of TER-plus (TERp-hter), a metric with state-of-the-art performance in recent evaluations (Snover et al., 2009) . Length-weighted Pearson's and Spearman's correlation are shown for all metrics at both the segment (Table 2 ) and document level ( METEOR-NEXT-hter outperforms all baseline metrics at both the segment and document level. Bootstrap sampling indicates that the segment-level correlation improvements of 0.026 in Pearson's r and 0.019 in Spearman's \u03c1 over METEOR-0.7-hter are statistically significant at the 95% level. TERp's correlation with HTER is still significantly higher across all categories. Our metric does run significantly faster than TERp, scoring approximately 120 segments per second to TERp's 3.8.",
"cite_spans": [
{
"start": 9,
"end": 22,
"text": "(Olive, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 320,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 359,
"end": 380,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 470,
"end": 495,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 524,
"end": 549,
"text": "(Agarwal and Lavie, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 710,
"end": 731,
"text": "(Snover et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 833,
"end": 841,
"text": "(Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We have presented an extended METEOR metric which shows higher correlation with HTER than baseline metrics, including traditional METEOR tuned on the same data. Our extensions are not specific to HTER tasks; improved alignments and additional features should improve performance on any task having sufficient tuning data. Although our metric does not outperform TERp, it should be noted that HTER incorporates TER alignments, providing TER-based metrics a natural advantage. Our metric also scores segments relatively quickly, making it a viable choice for tuning MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was funded in part by NSF grants IIS-0534932 and IIS-0915327.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Meteor, m-bleu and m-ter: Evaluation Metrics for High-Correlation with Human Rankings of Machine Translation Output",
"authors": [
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of WMT08",
"volume": "",
"issue": "",
"pages": "115--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhaya Agarwal and Alon Lavie. 2008. Meteor, m-bleu and m-ter: Evaluation Metrics for High-Correlation with Human Rankings of Machine Translation Output. In Proc. of WMT08, pages 115-118.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Im- proved Correlation with Human Judgments. In Proc. of the ACL Workshop on Intrinsic and Extrinsic Evalu- ation Measures for Machine Translation and/or Sum- marization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL05",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Proc. of ACL05, pages 597-604.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2009 Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of WMT09",
"volume": "",
"issue": "",
"pages": "1--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 Workshop on Statistical Machine Translation. In Proc. of WMT09, pages 1-28.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of WMT07",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. METEOR: An Automatic Metric for MT Evaluation with High Lev- els of Correlation with Human Judgments. In Proc. of WMT07, pages 228-231.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Global Autonomous Language Exploitation (GALE). DARPA/IPTO Proposer Information Pamphlet",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Olive",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Olive. 2005. Global Autonomous Language Ex- ploitation (GALE). DARPA/IPTO Proposer Informa- tion Pamphlet.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Eval- uation of Machine Translation. In Proc. of ACL02, pages 311-318.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Snowball: A language for stemming algorithms",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Porter",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Porter. 2001. Snowball: A language for stem- ming algorithms. http://snowball.tartarus.org/texts/.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Official results of the NIST",
"authors": [
{
"first": "M",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bronsart",
"suffix": ""
}
],
"year": 2008,
"venue": "Metrics for MAchine TRanslation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Przybocki, K. Peterson, and S Bronsart. 2008. Official results of the NIST 2008 \"Metrics for MAchine TRanslation\" Challenge (MetricsMATR08).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Study of Translation Edit Rate with Targeted Human Annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of AMTA-2006",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annota- tion. In Proc. of AMTA-2006, pages 223-231.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of WMT09",
"volume": "",
"issue": "",
"pages": "259--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009. Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric. In Proc. of WMT09, pages 259- 268.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Information Retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. van Rijsbergen, 1979. Information Retrieval, chap- ter 7. 2nd edition.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>). System level</td></tr></table>"
},
"TABREF2": {
"text": "Segment level correlation with HTER.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Metric</td><td colspan=\"2\">Pearson's r Spearman's \u03c1</td></tr><tr><td>BLEU-4</td><td>-0.689</td><td>-0.686</td></tr><tr><td>TER</td><td>0.675</td><td>0.679</td></tr><tr><td>METEOR-0.7-af</td><td>-0.696</td><td>-0.699</td></tr><tr><td>METEOR-0.7-rank</td><td>-0.691</td><td>-0.693</td></tr><tr><td>METEOR-0.7-hter</td><td>-0.704</td><td>-0.705</td></tr><tr><td>METEOR-NEXT-hter</td><td>-0.719</td><td>-0.713</td></tr><tr><td>TERp-hter</td><td>0.738</td><td>0.747</td></tr></table>"
},
"TABREF3": {
"text": "Document level correlation with HTER.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}