{
"paper_id": "U18-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:08.256892Z"
},
"title": "Towards Efficient Machine Translation Evaluation by Modelling Annotators",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne Victoria",
"location": {
"postCode": "3010",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne Victoria",
"location": {
"postCode": "3010",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne Victoria",
"location": {
"postCode": "3010",
"country": "Australia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current machine translation evaluations use Direct Assessment, based on crowdsourced judgements from a large pool of workers, along with quality control checks, and a robust method for combining redundant judgements. In this paper we show that the quality control mechanism is overly conservative, increasing the time and expense of the evaluation. We propose a model that does not filter workers, and takes into account varying annotator reliabilities. Our model effectively weights each worker's scores based on the inferred precision of the worker, and is much more reliable than the mean of either the raw or standardised scores.",
"pdf_parse": {
"paper_id": "U18-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Current machine translation evaluations use Direct Assessment, based on crowdsourced judgements from a large pool of workers, along with quality control checks, and a robust method for combining redundant judgements. In this paper we show that the quality control mechanism is overly conservative, increasing the time and expense of the evaluation. We propose a model that does not filter workers, and takes into account varying annotator reliabilities. Our model effectively weights each worker's scores based on the inferred precision of the worker, and is much more reliable than the mean of either the raw or standardised scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Accurate evaluation is critical for measuring progress in machine translation (MT). Despite progress over the years, automatic metrics are still biased, and human evaluation is still a fundamental requirement for reliable evaluation. The process of collecting human annotations is timeconsuming and expensive, and the data is always noisy. The question of how to efficiently collect this data has evolved over the years, but there is still scope for improvement. Furthermore, once the data has been collected, there is no consensus on the best way to reason about translation quality. Direct Assessment (\"DA\": Graham et al. (2017) ) is currently accepted as the best practice for human evaluation, and is the official method at the Conference for Machine Translation (Bojar et al., 2017a) . Every annotator scores a set of translation-pairs, which includes quality control items designed to filter out unreliable workers.",
"cite_spans": [
{
"start": 610,
"end": 630,
"text": "Graham et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 767,
"end": 788,
"text": "(Bojar et al., 2017a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the quality control process has low recall for good workers: as demonstrated in Section 3, about one third of good data is discarded, increasing expense. Once good workers are identified, their outputs are simply averaged to produce the final 'true' score, despite their varying accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we provide a detailed analysis of these shortcomings of DA and propose a Bayesian model to address these issues. Instead of standardising individual worker scores, our model can automatically infer worker offsets using the raw scores of all workers as input. In addition, by learning a worker-specific precision, each worker effectively has a differing magnitude of vote in the ensemble. When evaluated on the WMT 2016 Tr-En dataset which has a high proportion of unskilled annotators, these models are more efficient than the mean of the standardised scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Conference on Machine Translation (WMT) annually collects human judgements to evaluate the MT systems and metrics submitted to the shared tasks. The evaluation methodology has evolved over the years, from 5 point adequacy and fluency rating, to relative rankings (\"RR\"), to DA. With RR, annotators are asked to rank translations of 5 different MT systems. In earlier years, the final score of a system was the expected number of times its translations score better than translations by other systems (expected wins). Bayesian models like Hopkins and May (Hopkins and May, 2013) and Trueskill (Sakaguchi et al., 2014) were then proposed to learn the relative ability of the MT systems. Trueskill was adopted by WMT in 2015 as it is more stable and efficient than the expected wins heuristic.",
"cite_spans": [
{
"start": 558,
"end": 581,
"text": "(Hopkins and May, 2013)",
"ref_id": "BIBREF10"
},
{
"start": 596,
"end": 620,
"text": "(Sakaguchi et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "DA was trialled at WMT 2016 (Bojar et al., 2016a) , and has replaced RR since 2017 (Bojar et al., 2017a) . It is more scalable than RR as the number of systems increases (we need to obtain one annotation per system, instead of one annotation per system pair). Each translation is rated independently, minimising the risk of being influenced by the relative quality of other translations. Ideally, it is possible that evaluations can be compared across multiple datasets. For example, we can track the progress of MT systems for a given language pair over the years. Another probabilistic model, EASL (Sakaguchi and Van Durme, 2018) , has been proposed that combines some advantages of DA with Trueskill. Annotators score translations from 5 systems at the same time on a sliding scale, allowing users to explicitly specify the magnitude of difference between system translations. Active learning to select the systems in each comparison to increase efficiency. But it does not model worker reliability, and is, very likely, not compatible with longitudinal evaluation, as the systems are effectively scored relative to each other.",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "(Bojar et al., 2016a)",
"ref_id": null
},
{
"start": 83,
"end": 104,
"text": "(Bojar et al., 2017a)",
"ref_id": null
},
{
"start": 600,
"end": 631,
"text": "(Sakaguchi and Van Durme, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In NLP, most other research on learning annotator bias and reliability has been on categorical data (Snow et al., 2008; Carpenter, 2008; Hovy et al., 2013; Passonneau and Carpenter, 2014) .",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Snow et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 120,
"end": 136,
"text": "Carpenter, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 137,
"end": 155,
"text": "Hovy et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 156,
"end": 187,
"text": "Passonneau and Carpenter, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To measure adequacy, in DA, annotators are asked to rate how adequately an MT output expresses the meaning of a reference translation using a continuous slider, which maps to an underlying scale of 0-100. These annotations are crowdsourced using Amazon Mechanical Turk, where \"workers\" complete \"Human Intelligence Tasks\" (HITs) in the form of one or more micro-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "Each HIT consists of 70 MT system translations, along with an additional 30 control items:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "1. degraded versions of 10 of these translations; 2. 10 reference translations by a human expert, corresponding to 10 system translations; and 3. repeats of another 10 translations. The scores on the quality control items are used to filter out workers who either click randomly or on the same score continuously. A conscientious worker would give a near perfect score to reference translations, give a lower score to degraded translations when compared to the corresponding MT system translation, and be consistent with scores for repeat translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "The paired Wilcoxon rank-sum test is used to test whether the worker scored degraded translations worse than the corresponding system translation. The (arbitrary but customary) cutoff of p < 0.05 is used to determine good workers. The paired Wilcoxon rank-sum test (p < 0.05) is used to test whether the worker scored degraded translations worse than the corresponding system translation. The remaining workers are further tested to check that there is no significant difference between their scores for repeat-pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
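As a concrete illustration of the quality-control check described above, here is a short, self-contained sketch. The paper speaks of a paired Wilcoxon rank-sum test; the code below assumes SciPy's Wilcoxon signed-rank test (the standard paired variant), so the exact configuration of the official WMT scripts may differ.

```python
# Sketch of the per-worker quality-control check: did this worker score the
# 10 degraded translations significantly lower than the corresponding
# originals?  Assumes a one-sided Wilcoxon signed-rank test (paired); the
# official WMT tooling may be configured differently.
import numpy as np
from scipy.stats import wilcoxon

def passes_degradation_check(original_scores, degraded_scores, alpha=0.05):
    # alternative="less": the degraded scores are expected to be lower.
    _, p_value = wilcoxon(degraded_scores, original_scores, alternative="less")
    return p_value < alpha

# A worker who consistently penalises the degraded versions passes the check.
orig = np.array([72, 81, 64, 90, 55, 77, 68, 83, 71, 60])
degr = np.array([41, 60, 30, 72, 22, 50, 39, 61, 45, 28])
print(passes_degradation_check(orig, degr))  # True (p ~= 0.001)
```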
{
"text": "Worker scores are manually examined to filter out workers who obviously gave the same score to all translations, or scored translations at random. Only these workers are rejected payment. Thus, other workers who do not pass the quality control check are paid for their efforts, but their scores are unused, increasing the overall cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "Some workers might have high standards and give consistently low scores for all translations, while others are more lenient. And some workers may only use the central part of the scale. Standardising individual workers' scores makes them more comparable, and reduces noise before calculating the mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "The final score of an MT system is the mean standardised score of its translations after discarding scores that do not meet quality control criteria. The noise in worker scores is cancelled out when a large number of translations are averaged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
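As a minimal sketch of the scoring pipeline just described, assuming a pandas DataFrame with hypothetical columns "worker", "system" and "score" (one row per judgement; these names are not from the paper's code):

```python
# z-score each worker's raw scores, then average the standardised scores per
# MT system.  Column names are illustrative only.
import pandas as pd

def system_scores(judgements: pd.DataFrame) -> pd.Series:
    # Standardise within each worker so differing leniency and scale use
    # become comparable across workers.
    z = judgements.groupby("worker")["score"].transform(
        lambda s: (s - s.mean()) / s.std())
    # Final score of an MT system: mean standardised score of its translations.
    return judgements.assign(z=z).groupby("system")["z"].mean()
```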
{
"text": "To obtain accurate scores of individual translations, multiple judgments are collected and averaged. As we increase the number of annotators per translation, there is greater consistency and reliability in the mean score. This was empirically tested by showing that there is high correlation between the mean of two independent sets of judgments, when the sample size is greater than 15 (Graham et al., 2015) .",
"cite_spans": [
{
"start": 387,
"end": 408,
"text": "(Graham et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "However, both these tests are based on a sample-size of 10 items, and, as such, the first test has low power; we show that it filters out a large proportion of the total workers. One solution would be to increase the sample size of the degraded-reference-pairs, but this would be at the expense of the number of useful worker annotations. It is better to come up with a model that would use the scores of all workers, and is more robust to low quality scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "Automatic metrics such as BLEU (Papineni et al., 2002) are generally evaluated using the Pear- son correlation with the mean standardised score of the good workers. We similarly evaluate a worker's accuracy using the Pearson correlation of the worker's scores with this ground truth. Over all the data collected for WMT16, the group of good workers are, on average, more accurate than the group of workers who failed the significance test. However, as seen in Figure 1a , there is substantial overlap in the accuracies of the two groups. We can see that very few inaccurate workers were included. However, about a third of the total workers whose scores have a correlation greater than 0.6 were not approved. In particular, over the Tr-En Dataset, the significance test was not very effective, as seen in Figure 1b .",
"cite_spans": [
{
"start": 31,
"end": 54,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 460,
"end": 469,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 805,
"end": 814,
"text": "Figure 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
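The worker-accuracy measure used here is simply the Pearson correlation between a worker's scores and the mean standardised score of the good workers on the same translations; a minimal sketch (variable names are illustrative):

```python
# Worker accuracy: Pearson's r between the worker's scores and the
# "ground truth" (mean standardised score of the good workers), computed
# over the translations that this worker annotated.
import numpy as np
from scipy.stats import pearsonr

def worker_accuracy(worker_scores, ground_truth):
    r, _ = pearsonr(np.asarray(worker_scores, dtype=float),
                    np.asarray(ground_truth, dtype=float))
    return r
```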
{
"text": "Workers whose scores pass the quality control check are given equal weight, despite the variation in their reliability. Given that quality control is not always reliable (as with the Tr-En dataset, e.g.), this could include worker with scores as low as r = 0.2 correlation with the ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "While worker standardisation succeeds in increasing inter-annotator consistency, this process discards information about the absolute quality of the translations in the evaluation set. When using the mean of standardised scores, we cannot compare MT systems across independent evalua-tions. In the evaluation of the WMT 17 Neural MT Training Task, the baseline system trained on 4GB GPU memory was evaluated separately from the baseline trained on 8 GB GPU memory and the other submissions. In this setup of manual evaluation, Baseline-4GB scores slightly higher than Baseline-8GB when using raw scores, which is possibly due to chance. However, it scores significantly higher when using standardised scores, which goes against our expectations (Bojar et al., 2017b) .",
"cite_spans": [
{
"start": 745,
"end": 766,
"text": "(Bojar et al., 2017b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Assessment",
"sec_num": "3"
},
{
"text": "We use a simple model, assuming that a worker score is normally distributed around the true quality of the translation. Each worker has a precision parameter \u03c4 that models their accuracy: workers with high \u03c4 are more accurate. In addition, we include a worker-specific offset \u03b2, which models their deviation from the true score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "For each translation i \u2208 T , we draw the true quality \u00b5 from the standard normal distribution. 1 Then for each worker j \u2208 W , we draw their accuracy \u03c4 j from a gamma distribution with shape parameter k and rate parameter \u03b8. 2 The offset \u03b2 j is again drawn from the standard normal distribution. The worker's score r ij is drawn from a normal distribution, with mean \u00b5 i + \u03b2 j , and precision \u03c4 j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "r ij = N \u00b5 i + \u03b2 j , \u03c4 \u22121 j (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
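For intuition, the generative story of Section 4 and Equation (1) can be simulated directly. This is a sketch only: the Gamma hyperparameters are taken from footnote 2, and the dataset sizes are chosen arbitrarily.

```python
# Forward simulation of the model: mu_i ~ N(0, 1), tau_j ~ Gamma(k, rate=theta),
# beta_j ~ N(0, 1), r_ij ~ N(mu_i + beta_j, 1 / tau_j).
import numpy as np

rng = np.random.default_rng(0)
n_items, n_workers = 560, 20          # sizes chosen for illustration only
k, theta = 2.0, 1.0                   # shape and rate of the Gamma prior (footnote 2)

mu = rng.normal(0.0, 1.0, size=n_items)                       # true quality
tau = rng.gamma(shape=k, scale=1.0 / theta, size=n_workers)   # worker precision
beta = rng.normal(0.0, 1.0, size=n_workers)                   # worker offset

# r[i, j]: score that worker j gives to translation i (Equation (1)).
r = rng.normal(mu[:, None] + beta[None, :], 1.0 / np.sqrt(tau)[None, :])
```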
{
"text": "To help the model, we add constraints on the quality control items: the true quality of the degraded translation is lower than the quality of the corresponding system translation. In addition, the true quality of the repeat items should be approximately equal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We expect that the model will learn a high \u03c4 for good quality workers, and give their scores higher weight when estimating the mean. We believe that the additional constraints will help the model to infer the worker precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "DA can be viewed as the Maximum Likelihood Estimate of this model, with the following substitutions in Equation (1): s ij is the standardised score of worker j, \u03b2 j is 0 for all workers, and \u03c4 is Figure 2 : The proposed model, where worker j \u2208 W has offset \u03b2 j and precision \u03c4 j , translation i \u2208 T has quality \u00b5 i , and worker j scores translation i with r ij .",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "s ij = N \u00b5 i , \u03c4 \u22121 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "The choice of a Gaussian distribution to model worker scores is technically deficient as a Gaussian is unbounded, but it is still a reasonable approximation. This could be remedied, for example, by using a truncated Gaussian distribution, which we leave to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We want to maximise the likelihood of the observed judgments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "P (r) = W j=1 P (\u03b2 j )P (\u03c4 j ) T i=1 P (\u00b5 i ) P (r i,j |\u00b5 i , \u03b2, \u03c4 ) d\u03b2 d\u03c4 d\u00b5 = W j=1 N (\u03b2 j |0, 1) \u0393 (\u03c4 j |k, \u03b8) T i=1 N (\u00b5 i |0, 1) N r ij |\u00b5 i , \u03c4 \u22121 d\u03b2 d\u03c4 d\u00b5 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We use the Expectation Propagation algorithm (Minka, 2001) to infer posteriors over \u00b5 and worker parameters \u03b2 and \u03c4 . 3 Expectation Propagation is a technique for approximating distributions which can be written as a product of factors. It iteratively refines each factor by minimising the KL divergence from the approximate to the true distribution.",
"cite_spans": [
{
"start": 45,
"end": 58,
"text": "(Minka, 2001)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
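The paper performs full posterior inference with Expectation Propagation in Infer.NET. Purely to illustrate how the model weights workers by their inferred precision, the following is a simplified alternating point-estimate scheme for the same Gaussian-Gamma model (conjugate updates, quality-control constraints omitted); it is not the authors' inference procedure.

```python
# Simplified alternating updates for mu (item quality), beta (worker offset)
# and tau (worker precision) under the model of Section 4.  Point estimates
# only; the paper instead infers posteriors with EP (Infer.NET).
import numpy as np

def fit(R, k=2.0, theta=1.0, n_iter=50):
    """R: (n_items, n_workers) array of standardised scores, NaN = unscored."""
    obs = ~np.isnan(R)
    n_j = obs.sum(axis=0)                         # judgements per worker
    mu = np.nanmean(R, axis=1)                    # init: plain per-item mean
    beta = np.zeros(R.shape[1])
    tau = np.ones(R.shape[1])
    for _ in range(n_iter):
        # mu_i | beta, tau: Gaussian conjugacy with the N(0, 1) prior.
        resid = np.where(obs, R - beta[None, :], 0.0)
        mu = (resid * tau[None, :]).sum(axis=1) / (
            1.0 + (tau[None, :] * obs).sum(axis=1))
        # beta_j | mu, tau: Gaussian conjugacy with the N(0, 1) prior.
        dev = np.where(obs, R - mu[:, None], 0.0)
        beta = tau * dev.sum(axis=0) / (1.0 + n_j * tau)
        # tau_j | mu, beta: posterior-mean update under the Gamma(k, theta) prior.
        sq = np.where(obs, (R - mu[:, None] - beta[None, :]) ** 2, 0.0).sum(axis=0)
        tau = (k + 0.5 * n_j) / (theta + 0.5 * sq)
    return mu, beta, tau
```

Workers with large inferred tau contribute more to each item's estimate, which is the precision-weighting effect the model is designed to capture.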
{
"text": "We evaluate our models on data from the segmentlevel WMT 16 dataset (Bojar et al., 2016b) . We choose the Turkish to English (Tr-En) dataset, which consists of 256 workers, of which about Figure 3 : Pearson's r of the estimated true score with the \"ground truth\" as we increase the number of workers per translation. two thirds (67.58%) fail the quality control measures. It consists of 560 translations, with at least 15 \"good\" annotations for each of these translations (see Figure 1b) .",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Bojar et al., 2016b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 3",
"ref_id": null
},
{
"start": 477,
"end": 487,
"text": "Figure 1b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use the mean of 15 good standardised annotations as a proxy for the gold standard when evaluating efficiency, and starting from one worker, increase the number of workers to the maximum available. Figure 3 shows that our models are consistently more accurate than the mean of the standardised scores. Figure 4 shows the learned precision and offset for 5 annotators per translation, against the precision and offset of worker scores calculated with respect to the \"ground truth\". This shows that the model is learning worker parameters even when the number of workers is very small, and is using this information to get a better estimate of the mean (the model obtains r = 0.72, compared to r = 0.65 for the mean z-score).",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 3",
"ref_id": null
},
{
"start": 304,
"end": 312,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "On further examination of the outlier in Figure 4a, we find that this worker is pathologically bad. They give a 0 score for all the translations in one HIT, and mostly 100s to the other half. This behaviour is not captured by our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 47,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We showed that significance tests over a small set of quality control items are ineffective at identifying good and bad workers, and propose a model that does not depend on this step. Instead, it uses constraints on the quality control items to learn worker precision, and returns a more reliable estimate of the mean using fewer worker scores per translation. This model does not tell us when to stop collecting judgments. It would be useful to know to have a method to determine when to stop Figure 4 : Scatter plot of worker precision/offset inferred by the model with only 5 workers per translation, against the precision/offset of the deltas of the worker score and the \"ground truth\". collecting annotations based on scores received, instead of relying on a number obtained from onetime experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 494,
"end": 502,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "More importantly, we need to have ways to calibrate worker scores to ensure consistent evaluations across years, so we can measure progress in MT over time. Even if a better model is found to calibrate workers, this does not ensure consistency in judgments, and we believe the HIT structure needs to be changed. We propose to replace the 30 quality control items with items of reliably known quality from the previous year. The correlation between the worker scores and the known scores can be used to assess the reliability of the worker. Moreover, we can scale the worker scores based on these known items, to ensure consistent scores over years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
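A hypothetical sketch of the calibration proposal above: compare each worker's scores on items of known quality with the known scores, use the correlation as a reliability estimate, and rescale that worker's scores onto the known scale. The reliability threshold and the least-squares rescaling are illustrative assumptions, not part of the paper.

```python
# Hypothetical calibration against items of reliably known quality from a
# previous year.  The min_r threshold and the linear rescaling are
# illustrative choices, not taken from the paper.
import numpy as np
from scipy.stats import pearsonr

def calibrate_worker(known_item_scores, known_true_scores, all_scores, min_r=0.5):
    r, _ = pearsonr(known_item_scores, known_true_scores)
    if r < min_r:
        return r, None                          # treat the worker as unreliable
    # Map this worker's scale onto the known scale with a least-squares line.
    slope, intercept = np.polyfit(known_item_scores, known_true_scores, deg=1)
    return r, slope * np.asarray(all_scores, dtype=float) + intercept
```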
{
"text": "We first standardise scores (across all workers together) in the dataset2 We use k = 2 and \u03b8 = 1 based on manual inspection of the distribution of worker precisions on a development dataset (WMT18 Cs-En)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the Infer.NET(Minka et al., 2018) framework to implement our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their valuable feedback and suggestions. This work was supported in part by the Australian Research Council.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Conference on Machine Translation (WMT17)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Machine Translation (WMT17).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Shared Task Papers",
"authors": [],
"year": null,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers. Copenhagen, Denmark, pages 169-214. http://www.aclweb.org/anthology/W17-4717.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Karin Verspoor, and Marcos Zampieri. 2016a. Findings of the 2016 Conference on Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the First Conference on Machine Translation. Berlin, Germany",
"volume": "",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016a. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation. Berlin, Ger- many, pages 131-198.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016b. Results of the WMT16",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Yvette Graham, Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016b. Results of the WMT16",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Proceedings of the First Conference on Machine Translation. Berlin, Germany",
"authors": [
{
"first": "",
"middle": [],
"last": "Metrics Shared Task",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "199--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Metrics Shared Task. In Proceedings of the First Conference on Machine Translation. Berlin, Ger- many, pages 199-231.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Results of the WMT17 Neural MT Training task",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Musil",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "525--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Jind\u0159ich Helcl, Tom Kocmi, Jind\u0159ich Libovick\u00fd, and Tom\u00e1\u0161 Musil. 2017b. Results of the WMT17 Neural MT Training task. In Proceedings of the Second Conference on Ma- chine Translation, Volume 2: Shared Task Pa- pers. Copenhagen, Denmark, pages 525-533. http://www.aclweb.org/anthology/W17-4757.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilevel Bayesian models of categorical data annotation",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 2008,
"venue": "Aliasi",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Carpenter. 2008. Multilevel Bayesian models of categorical data annotation. Technical report, Alias- i.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Can machine translation systems be evaluated by the crowd alone",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2017,
"venue": "Natural Language Engineering",
"volume": "23",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation sys- tems be evaluated by the crowd alone. Natural Lan- guage Engineering 23(1):330.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Accurate evaluation of segment-level machine translation metrics",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1183--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level ma- chine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Hu- man Language Technologies. Denver, USA, pages 1183-1191.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Models of translation competitions",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013)",
"volume": "",
"issue": "",
"pages": "1416--1424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2013. Models of translation competitions. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (ACL 2013). Sofia, Bulgaria, pages 1416-1424.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning whom to trust with MACE",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013)",
"volume": "",
"issue": "",
"pages": "1120--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (NAACL HLT 2013). Atlanta, USA, pages 1120-1130.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Infer.NET 0.3. Microsoft Research Cambridge",
"authors": [
{
"first": "T",
"middle": [],
"last": "Minka",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Winn",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Guiver",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zaykov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fabian",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bronskill",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Minka, J.M. Winn, J.P. Guiver, Y. Zaykov, D. Fabian, and J. Bronskill. 2018. Infer.NET 0.3. Mi- crosoft Research Cambridge. http://dotnet. github.io/infer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Expectation propagation for approximate Bayesian inference",
"authors": [
{
"first": "Thomas",
"middle": [
"P"
],
"last": "Minka",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "362--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas P. Minka. 2001. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Arti- ficial Intelligence. Seattle, USA, pages 362-369.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics (ACL 2002). Philadelphia, USA, pages 311-318.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The benefits of a model of annotation",
"authors": [
{
"first": "J. Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "2",
"issue": "1",
"pages": "311--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Rebecca Passonneau and Bob Carpenter. 2014. The benefits of a model of annotation. Transactions of the Association of Computational Linguistics 2(1):311-326.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient elicitation of annotations for human evaluation of machine translation",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annota- tions for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Baltimore, USA, pages 1-11.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient online scalar annotation with bounded support",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded sup- port. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers). pages 208-218. http://aclweb.org/anthology/P18-1020.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "254--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing. Honolulu, USA, pages 254-263.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "(a) all language pairs (b) Tr-En language pair Figure 1: Accuracy of \"good\" vs \"bad\" workers in the WMT 2016 dataset."
}
}
}
}