{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:37.967758Z"
},
"title": "The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Results",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UK \u2021 TU",
"location": {
"settlement": "Darmstadt",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Royal",
"middle": [],
"last": "Holloway",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we introduce the Eval4NLP-2021 shared task on explainable quality estimation. Given a source-translation pair, this shared task requires not only to provide a sentencelevel score indicating the overall quality of the translation, but also to explain this score by identifying the words that negatively impact translation quality. We present the data, annotation guidelines and evaluation setup of the shared task, describe the six participating systems, and analyze the results. To the best of our knowledge, this is the first shared task on explainable NLP evaluation metrics. Datasets and results are available at https://github. com/eval4nlp/SharedTask2021.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we introduce the Eval4NLP-2021 shared task on explainable quality estimation. Given a source-translation pair, this shared task requires not only to provide a sentencelevel score indicating the overall quality of the translation, but also to explain this score by identifying the words that negatively impact translation quality. We present the data, annotation guidelines and evaluation setup of the shared task, describe the six participating systems, and analyze the results. To the best of our knowledge, this is the first shared task on explainable NLP evaluation metrics. Datasets and results are available at https://github. com/eval4nlp/SharedTask2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent Natural Language Processing (NLP) systems based on pre-trained representations from Transformer language models, such as BERT (Devlin et al., 2019) and XLM-Roberta (Conneau et al., 2020) , have achieved outstanding results in a variety of tasks. This boost in performance, however, comes at the cost of efficiency and interpretability. Interpretability is a major concern in modern Artificial Intelligence (AI) and NLP research (Doshi-Velez and Kim, 2017; Danilevsky et al., 2020) , as black-box models undermine users' trust in new technologies (Mercado et al., 2016; Toreini et al., 2020) .",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 171,
"end": 193,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 435,
"end": 462,
"text": "(Doshi-Velez and Kim, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 463,
"end": 487,
"text": "Danilevsky et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 553,
"end": 575,
"text": "(Mercado et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 576,
"end": 597,
"text": "Toreini et al., 2020)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the Eval4NLP 2021 shared task, we focus on evaluating machine translation (MT) as an example of this problem. Specifically, we look at the task of quality estimation (QE), where the aim is to predict the quality of MT output at inference time without access to reference translations (Blatz et al., 2004; Specia et al., 2018b) . 1 Translation quality can be assessed at different levels of granularity: sentencelevel, i.e. predicting the overall quality of translated sentences, and word-level, i.e. highlighting specific errors in the MT output. Those have traditionally been treated as two separate tasks, each one requiring dedicated training data.",
"cite_spans": [
{
"start": 287,
"end": 307,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF2"
},
{
"start": 308,
"end": 329,
"text": "Specia et al., 2018b)",
"ref_id": "BIBREF48"
},
{
"start": 332,
"end": 333,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this shared task, we propose to address wordlevel translation error identification as an explainability task. 2 Explainability is a broad area aimed at explaining predictions of machine learning models. Rationale extraction methods achieve this by selecting a portion of the input that justifies model output for a given data point (Lei et al., 2016; . A natural way to explain sentencelevel quality assessment is to identify translation errors. Hence, we frame error identification as a task of providing explanations for the predictions of sentence-level QE models. We claim that this task represents a challenging new benchmark for testing explainability for NLP and provides a new way of addressing word-level QE.",
"cite_spans": [
{
"start": 335,
"end": 353,
"text": "(Lei et al., 2016;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the one hand, QE is different from other explainable NLP tasks with existing datasets (DeYoung et al., 2020) in various important aspects. First, it is a regression task, as opposed to binary or multiclass text classification explored in previous work. Second, it is a multilingual task where the output score captures the relationship between source and target sentences. Finally, QE is fundamentally different from e.g. text classification, where clues are typically separate words or phrases (Zaidan et al., 2007) that can often be considered refers to unsupervised cross-lingual metrics that assess MT quality by computing distances between cross-lingual semantic representations of the source and target sentences Song et al., 2021) .",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(DeYoung et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 498,
"end": 519,
"text": "(Zaidan et al., 2007)",
"ref_id": "BIBREF57"
},
{
"start": 722,
"end": 740,
"text": "Song et al., 2021)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 A study on global explainability of MT evaluation metrics, disentangling them along linguistic factors such as syntax and semantics, has recently been conducted in Kaster et al. (2021) . In contrast, our shared task addresses local explainability of individual input instances. independently of the rest of the text. By contrast, translation errors can only be identified given the context of the source and target sentences. Thus, this shared task provides a new benchmark for testing explainability methods in NLP.",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "Kaster et al. (2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, treating word-level QE as an explainability problem offers some advantages compared to the current approaches. First, we can potentially avoid the need for supervised data at word level. Second, gold standard test sets can be made less expensive and more reliable. As we will show in Section 2, rationalized sentence-level evaluation can be a middle ground between relatively cheap but noisy annotations derived from post-editing (Fomicheva et al., 2020) and very informative but expensive explicit error annotation based on error taxonomies, such as the Multidimensional Quality Metrics (MQM) framework (Lommel et al., 2014b) . For this shared task, we build a new test set with manually annotated explanations for sentence-level quality ratings. To the best of our knowledge, this is the first MT evaluation dataset annotated with human rationales.",
"cite_spans": [
{
"start": 449,
"end": 473,
"text": "(Fomicheva et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 623,
"end": 645,
"text": "(Lommel et al., 2014b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main objective of the shared task is threefold. First, it aims to explore the plausibility of explainable evaluation metrics (Wiegreffe and Pinter, 2019) , by proposing a test set with manually annotated rationales. It helps the community better understand how similar the generated explanations are to the human explanations. Second, the shared task encourages research on unsupervised or semisupervised methods for error identification, so as to reduce the cost on word-level MT error annotation. Last but not least, the shared task sheds light on how current NLP evaluation systems arrive at their predictions and to what extent this process is aligned with human reasoning.",
"cite_spans": [
{
"start": 129,
"end": 157,
"text": "(Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this shared task, we collected a new test set with (i) manual assessment of translation quality at sentence level and (ii) word-level rationales that explain the sentence-level scores (Section 2.1). For training and development purposes, the participants were advised to use existing resources, which are briefly discussed in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Language pairs and MT systems The test set contains four language pairs: Estonian-English (Et-En), Romanian-English (Ro-En), Russian-German (Ru-De) and German-Chinese (De-Zh). For Et-En and Ro-En, we use the source and translated sentences from the test21 partition of the MLQE-PE dataset (Fomicheva et al., 2020) . For Ru-De, the source sentences were extracted from Wikipedia following the procedure described in Guzm\u00e1n et al. (2019) and translated using the ML50 fairseq multilingual Transformer model (Tang et al., 2020) . For De-Zh, the translations were produced using the Google Translate API, as the MT quality of the ML50 model was too low for this language pair according to our preliminary experiments. Sentence-and word-level annotation For this annotation effort, we adapted the Appraise manual evaluation interface (Federmann, 2012) . For sentence-level annotation, we follow the guidelines from the MLQE-PE dataset (Fomicheva et al., 2020) , a variant of the so called direct assessment (DA) scores proposed by Graham et al. (2016) . As illustrated in Figure 2 , the annotators were asked to provide a sentence rating by moving a slider on the quality scale from left (worse) to right (best). They were additionally provided with instructions on what specific quality ranges represent. Following Graham et al. (2016) , the numeric values were not visible to the annotators, but the scale is interpreted numerically as follows: 1-10 range represents a completely incorrect translation; 11-30, a translation that contains a few correct keywords, but the overall meaning is different or lost; 31-50, a translation that preserves parts of the original meaning; 51-70, a translation which is understandable and conveys the overall meaning of the source but contains a few errors; 71-90, a translation that closely preserves the semantics of the source and has only minor mistakes; and 91-100, a perfect translation. Crucially, besides the sentence-level rating, the annotators were asked to provide a rationale for their decisions. Specifically, for all translations except those they considered perfect, the annotators were required to highlight the words in the MT sentence corresponding to translation errors that would explain the assigned sentence score. 3 They were also asked to highlight the 3 For all languages except for Chinese, the source sen-source words that caused the errors in the MT output, as shown in Figure 2 . The missing contents was annotated by highlighting the source words that were not translated, whereas for the added (hallucinated) contents the annotators were only required to highlight the corresponding target words. We interpreted the highlighting as binary labels, indicating whether a given word is part of the rationale (positive class), or not (negative class). The annotators were provided with detailed annotation guidelines, which are available at https: //github.com/eval4nlp/SharedTask2021/ tree/main/annotation-guidelines.",
"cite_spans": [
{
"start": 289,
"end": 313,
"text": "(Fomicheva et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 415,
"end": 435,
"text": "Guzm\u00e1n et al. (2019)",
"ref_id": "BIBREF20"
},
{
"start": 505,
"end": 524,
"text": "(Tang et al., 2020)",
"ref_id": null
},
{
"start": 829,
"end": 846,
"text": "(Federmann, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 930,
"end": 954,
"text": "(Fomicheva et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 1026,
"end": 1046,
"text": "Graham et al. (2016)",
"ref_id": "BIBREF19"
},
{
"start": 1311,
"end": 1331,
"text": "Graham et al. (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1067,
"end": 1075,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 2431,
"end": 2439,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Eval4NLP Test Set",
"sec_num": "2.1"
},
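As a quick reference, the score bands described above can be summarized in a small helper. This is an illustrative sketch: only the band boundaries and descriptions come from the guidelines, the function itself is ours.

```python
def da_quality_band(score):
    """Map a 1-100 direct assessment score to the guideline description."""
    bands = [
        (10, "completely incorrect translation"),
        (30, "a few correct keywords, but the overall meaning is different or lost"),
        (50, "preserves parts of the original meaning"),
        (70, "understandable, conveys the overall meaning, but contains a few errors"),
        (90, "closely preserves the semantics of the source, only minor mistakes"),
        (100, "perfect translation"),
    ]
    for upper_bound, description in bands:
        if score <= upper_bound:
            return description
    raise ValueError("score must be in the 1-100 range")

print(da_quality_band(62))  # understandable, conveys the overall meaning, ...
```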
{
"text": "The annotation was conducted by 3 annotators for Et-En and Ro-En, and by (up to) 4 annotators for Ru-De and De-Zh. 4 Et-En and Ro-En data was annotated by Estonian and Romanian native speakers with near native proficiency in English. De-Zh data was annotated by Chinese native speakers with strong proficiency in German. Finally, Ru-De data was annotated by native speakers of Russian with near native proficiency in German. The annotators for Ro-En, Et-En, and Ru-De are students specializing in Linguistics and Translation or are professional translators; the annotators for De-Zh are students specializing in computer science. The cost of annotation was approximately 4,000 Euro, with working times of 15 to 25 hours per annotator for De-Zh and Ru-De (Et-En and Ro-En annotators were compensated for the whole work, instead of on an hourly basis, and not all of them noted down their working times).",
"cite_spans": [
{
"start": 115,
"end": 116,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eval4NLP Test Set",
"sec_num": "2.1"
},
{
"text": "To produce a single sentence-level score, we take an average across the scores from individual annotators. To obtain a single binary label for each token, we use a majority voting mechanism, where the token is considered as part of the rationale if it was highlighted by the majority of the annotators. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Eval4NLP Test Set",
"sec_num": "2.1"
},
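A minimal sketch of this aggregation, assuming the per-annotator sentence ratings and binary token highlights have already been collected (variable names are illustrative):

```python
def aggregate_sentence_score(annotator_scores):
    """Average of the 1-100 sentence ratings from individual annotators."""
    return sum(annotator_scores) / len(annotator_scores)


def aggregate_token_labels(annotator_highlights):
    """Majority-vote a binary rationale label for each token.

    annotator_highlights: one 0/1 vector per annotator, all the same length.
    The reliability weighting used for even annotator counts (footnote 5)
    is omitted here for simplicity.
    """
    n_annotators = len(annotator_highlights)
    labels = []
    for votes in zip(*annotator_highlights):
        labels.append(1 if sum(votes) > n_annotators / 2 else 0)
    return labels


print(aggregate_sentence_score([62, 70, 55]))    # 62.33...
print(aggregate_token_labels([[0, 1, 0, 1, 1],
                              [0, 1, 0, 0, 1],
                              [0, 0, 0, 1, 1]]))  # [0, 1, 0, 1, 1]
```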
{
"text": "Inter-annotator agreement Table 2 shows average agreement levels between our annotators (on common sets of annotated data instances). We use Pearson correlation for sentence-level scores and tences and MT outputs were tokenized with Moses tokenizer available at https://github.com/moses-smt/ mosesdecoder. For Chinese, the jieba tokenizer was used: https://github.com/fxsjy/jieba. Cohen's kappa coefficient for word-level annotations. To be more precise, we measure Pearson correlation among all common instances between two annotators and then report the average across annotators; we measure average kappa agreement (averaged over all sentences) between any two annotators and then report the average across all annotators. We observe that Ro-En and Ru-De are most consistently annotated and De-Zh and Et-En have least agreement on average. Overall, our agreements are acceptable, however, in all cases, ranging from 0.42 to 0.67 kappa on word-level and \u223c0.6 to 0.8 Pearson on sentence-level. For comparison, the average kappa reported by Lommel et al. (2014a) for the fine-grained MQM error annotation ranges from 0.25 to 0.34. Data statistics The number of annotated sentences, as well as the number of the source and target tokens in the test set are shown in Table 1 . In addition, we show the number of sentences with lower-than-perfect translation quality. This is the final subset of sentences that was used to evaluate the submissions to the shared task, since in our manual evaluation setup no rationales were required for the MT outputs with perfect quality. As shown in Table 1, for all the language pairs the vast majority of translations has a lower-than-perfect score, where the percentage of such sentences is the lowest for De-Zh (65%) and the highest for Ru-De (90%). Figure 1 shows the distribution of sentence-level scores for each language pair. The language pair with the highest average quality is De-Zh, whereas Ru-De has the lowest average score. For Et-En, Ro-En and Ru-De, the scores cover the whole quality range, while the distribution for De-Zh is highly skewed, which makes the task more challenging for this language pair (see Section 6). Table 4 shows the proportion of words annotated as rationales. The numbers in Table 4 are consistent with the average sentence-level quality, as De-Zh and Ru-De have the lowest and the highest percentage, respectively. This is expected given that lower quality translations should contain a higher number of errors. In general, the proportion of tokens considered relevant for explaining sentence-level ratings is fairly low. This is consistent with the annotation guidelines which stipulate that all and only the words necessary to justify the sentence score must be highlighted. Finally, we observe that, for Et-En and Ro-En, the percentage of annotated tokens is higher for the target than for the source sentences. This can be related to the presence of hallucinations, where the target contains words that do not have a clear correspondence with any part",
"cite_spans": [
{
"start": 1041,
"end": 1062,
"text": "Lommel et al. (2014a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1265,
"end": 1272,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1787,
"end": 1795,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2172,
"end": 2179,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 2250,
"end": 2257,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Eval4NLP Test Set",
"sec_num": "2.1"
},
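The agreement statistics reported above could be computed along these lines with scipy and scikit-learn. This is a sketch; note that the word-level helper flattens tokens across sentences, whereas the paper averages kappa per sentence before averaging over annotator pairs.

```python
from itertools import combinations

from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score


def sentence_level_agreement(scores_by_annotator):
    """Average pairwise Pearson correlation of sentence scores."""
    corrs = [pearsonr(a, b)[0]
             for a, b in combinations(scores_by_annotator, 2)]
    return sum(corrs) / len(corrs)


def word_level_agreement(labels_by_annotator):
    """Average pairwise Cohen's kappa over binary token labels
    (flattened across sentences in this simplified version)."""
    kappas = [cohen_kappa_score(a, b)
              for a, b in combinations(labels_by_annotator, 2)]
    return sum(kappas) / len(kappas)
```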
{
"text": "Pe 20 august , trupele s\u00e2rbe au\u00eenceput urm\u0203rirea austriecilor\u00een retragere . PE On 20 August , the Serbian troops began pursuing the retreating Austrians .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Src",
"sec_num": null
},
{
"text": "Serbian troops started pursuing Austria on 20 August in withdrawal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Ann-EXPL Serbian troops started pursuing Austria on 20 August in withdrawal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Ann-EXPL* Serbian troops started pursuing Austria on 20 August in withdrawal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Serbian troops started pursuing Austria on 20 August in withdrawal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
{
"text": "Ann-PE Serbian troops started pursuing Austria on 20 August in withdrawal .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
{
"text": "Color scale 0.0 0.5 1.0 Table 3 : Example of the target-side annotation from the Ro-En test set and the output expected from the participants. \"Src\" stands for the source sentence, \"MT\" is the MT output, \"PE\" is the post-edited version of the MT output taken from the MLQE-PE dataset. \"Ann-EXPL*\" is the mean of the binary scores for each word averaged across the annotators. \"Ann-EXPL\" corresponds to the binary scores obtained by aggregating individual annotations through majority voting (official gold standard of the shared task). \"Ann-PE\" is the word-level annotation derived from post-editing. \"Predictions\" contains the predictions (after min-max normalization) for this sentence from the IST-Unbabel submission to the constrained track. of the source sentence, as well as to typological differences between languages, whereby there tends to be a one-to-many correspondence between the source and target words.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
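The min-max normalization applied to the "Predictions" row is a simple per-sentence rescaling into [0, 1]; a sketch:

```python
def min_max_normalize(scores):
    """Rescale one sentence's token-level scores into the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # constant scores: nothing to normalize
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(min_max_normalize([0.2, 0.5, 1.4]))  # [0.0, 0.25, 1.0]
```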
{
"text": "Difference to existing QE datasets with wordlevel annotation The test set collected for this shared task is different from existing QE datasets with word-level annotation. A popular approach to building QE datasets is based on measuring post-editing effort (Bojar et al., 2017; Specia et al., 2018a; Fonseca et al., 2019; Specia et al., 2020) . This can be done at sentence level, by computing the so called HTER score (Snover et al., 2006) that represents the minimum number of edits a human language expert is required to make in order to correct the MT output; or at word level, by aligning the MT output to its post-edited version and annotating the misaligned source and target words. An important limitation of this strategy is that the annotated words do not necessarily correspond to translation errors, as correcting a specific error may involve changing multiple related words in the sen-tence. This is exacerbated by the limitations of the heuristics used to automatically align the MT and its post-edited version. Indeed, as shown in Table 4 , the percentage of error tokens on the same data for Ro-En and Et-En language pairs is considerably higher in the MLQE-PE dataset, where word-level annotation is derived from post-editing. An alternative approach is the explicit annotation of translation errors by human experts. This is typically done based on fine-grained error taxonomies such as the Multidimensional Quality Metrics (MQM) framework (Lommel et al., 2014b) . While such annotations provide very informative labelled data, the annotator agreement for this style of annotation is fairly low (Lommel et al., 2014a) and the annotation is very time-consuming. 6 The example in Table 3 shows a sample of the annotated data from the Ro-En test set. The first three rows correspond to the source (Src), the MT output (MT), and the post-edited MT output (PE). \"Ann-EXPL*\" shows the mean of the binary scores assigned by each annotator to a given word. Thus, the words \"20\" and \"August\" were included in the rationale by 1 out of 3 annotators for this example, the words \"Austria\" and \"in\" were highlighted by 2 out of 3 annotators; finally, the word \"withdrawal\" was included in the rationale by all 3 annotators. We can interpret this information as an indirect indication of error severity, as the most serious errors are expected to be noted by all of the annotators. \"Ann-EXPL\" shows the binary scores that we obtain through a majority voting mechanism, as described above. These binary scores were used for the official evaluation reported in Section 6. \"Predictions\" illustrates the predicted scores from one of the participants of the shared task. 7 The predictions almost perfectly correspond to the human rationale, as in both cases the words \"Austria\" and \"withdrawal\" receive the highest scores. Finally, for comparison, \"Ann-PE\" shows the word labels for this sentence taken from the MLQE-PE dataset. In this case all tokens are considered as errors since re-orderings (or \"shifts\") are not included in the set of possible edit operations used to compute minimum edit distance, from which the alignment between MT output and its PE is derived.",
"cite_spans": [
{
"start": 257,
"end": 277,
"text": "(Bojar et al., 2017;",
"ref_id": null
},
{
"start": 278,
"end": 299,
"text": "Specia et al., 2018a;",
"ref_id": "BIBREF47"
},
{
"start": 300,
"end": 321,
"text": "Fonseca et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 322,
"end": 342,
"text": "Specia et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 419,
"end": 440,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF44"
},
{
"start": 1459,
"end": 1481,
"text": "(Lommel et al., 2014b)",
"ref_id": "BIBREF29"
},
{
"start": 1614,
"end": 1636,
"text": "(Lommel et al., 2014a)",
"ref_id": "BIBREF28"
},
{
"start": 1680,
"end": 1681,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1046,
"end": 1054,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1697,
"end": 1704,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
{
"text": "To the best of our knowledge, this test set is the first MT evaluation dataset annotated with human rationales. The proposed annotation scheme has certain advantages for the QE task, as it allows to explicitly annotate translation errors, and at the same time results in higher agreement and less effort than fine-grained error annotation. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictions",
"sec_num": null
},
{
"text": "As discussed above, we use the same sentence-level annotation scheme as the one used in the MLQE-PE dataset. Therefore, for Ro-En and Et-En the participants could use the train and development partitions of MLQE-PE to build their sentencelevel models. The De-Zh and Ru-De language pairs represent a fully zero-shot scenario where no sentence-level training data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and development data",
"sec_num": "2.2"
},
{
"text": "The task consisted of building a QE system that (i) predicts the quality score for an input pair of source text and MT hypothesis, (ii) provides word-level evidence for its predictions. An example of the test data used for evaluation is shown in Table 3 . The participants were expected to provide explanations for each sentence pair in the form of continuous scores, with the highest scores corresponding to the tokens considered as relevant by human annotators. The participants could submit to either constrained or unconstrained track. For the constrained track, the participants were expected to use no supervision at word level, while in the unconstrained track they were allowed to use any word-level data for training.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
{
"text": "Explanations can be obtained either by building inherently interpretable models (Yu et al., 2019) or by using post-hoc explanation methods which extract explanations from an existing model (Ribeiro et al., 2016; Lundberg and Lee, 2017; Sundararajan et al., 2017a; Schulz et al., 2020) , for example by analysing the values of the gradient on each input feature. In this shared task, we provide both sentence-level training data and strong sentencelevel models (see the TransQuest-LIME baseline in Section 4), and thus encourage the participants to either train their own inherently interpretable models or use post-hoc techniques on top of our existing sentence-level models.",
"cite_spans": [
{
"start": 80,
"end": 97,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 189,
"end": 211,
"text": "(Ribeiro et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 212,
"end": 235,
"text": "Lundberg and Lee, 2017;",
"ref_id": "BIBREF30"
},
{
"start": 236,
"end": 263,
"text": "Sundararajan et al., 2017a;",
"ref_id": "BIBREF49"
},
{
"start": 264,
"end": 284,
"text": "Schulz et al., 2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
{
"text": "We accommodate the evaluation scheme to be suitable both for approaches that return continuous scores, and for supervised approaches that can return binary scores. Namely, we use evaluation metrics based on class probabilities that have been previously adapted for assessing the plausibility of rationale extraction methods (Atanasova et al., 2020) . Since explainability methods typically proceed on instance-by-instance basis, and the scores produced for different instances are not necessarily comparable, we compute the evaluation metrics for each instance separately and average the results across all instances in the test set. Following Fomicheva et al. (2021), we define the following evaluation metrics to assess the performance of the submissions to the shared task at the word-level: AUC score For each instance, we compute the area under the receiver operating characteristic curve (AUC score) to compare the continuous attribution scores against binary gold labels.",
"cite_spans": [
{
"start": 324,
"end": 348,
"text": "(Atanasova et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
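A sketch of the per-instance AUC computation, assuming `gold_labels` holds the binary token labels of one MT output and `attribution_scores` the submitted continuous scores (names are illustrative):

```python
from sklearn.metrics import roc_auc_score

def instance_auc(gold_labels, attribution_scores):
    """Per-instance AUC of continuous attributions against binary gold labels.
    Undefined when an instance contains only one class, which is one reason
    perfect translations are excluded from the word-level evaluation."""
    return roc_auc_score(gold_labels, attribution_scores)

# Gold rationale marks tokens 3 and 4 as errors.
print(instance_auc([0, 0, 0, 1, 1], [0.1, 0.2, 0.15, 0.9, 0.4]))  # 1.0
```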
{
"text": "Average Precision AUC scores can be overly optimistic for imbalanced data. Therefore, we also use Average Precision (AP). AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight (Zhu, 2004) .",
"cite_spans": [
{
"start": 299,
"end": 310,
"text": "(Zhu, 2004)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
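Average Precision per instance can be computed analogously (same illustrative assumptions as for the AUC sketch above):

```python
from sklearn.metrics import average_precision_score

def instance_ap(gold_labels, attribution_scores):
    """Per-instance Average Precision: the weighted mean of precisions at each
    threshold, weighted by the increase in recall from the previous threshold."""
    return average_precision_score(gold_labels, attribution_scores)

print(instance_ap([0, 0, 0, 1, 1], [0.1, 0.2, 0.15, 0.9, 0.4]))  # 1.0
```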
{
"text": "Recall at Top-K In addition, we report the Recall-at-Top-K metric commonly used in information retrieval. Applied to our setting, this metric computes the proportion of words with the highest attribution that correspond to translation errors against the total number of errors in the MT output. The code for computing the evaluation metrics can be found in the shared task github repository:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
{
"text": "https://github.com/eval4nlp/ SharedTask2021/tree/main/scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
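Recall-at-Top-K has no direct scikit-learn counterpart; below is a minimal sketch, under the assumption that K equals the number of gold error tokens in each instance (our reading of the description above):

```python
def recall_at_top_k(gold_labels, attribution_scores):
    """Share of gold error tokens found among the K highest-scoring tokens,
    with K set to the number of gold errors in the instance."""
    k = sum(gold_labels)
    if k == 0:
        return 0.0
    ranked = sorted(range(len(attribution_scores)),
                    key=lambda i: attribution_scores[i], reverse=True)
    return sum(gold_labels[i] for i in ranked[:k]) / k

print(recall_at_top_k([0, 0, 0, 1, 1], [0.1, 0.2, 0.15, 0.9, 0.4]))  # 1.0
```

As described above, each of the three metrics is computed per instance and then averaged over all instances in the test set.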
{
"text": "The shared task used CodaLab as the submission platform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation",
"sec_num": "3"
},
{
"text": "Random baseline is built by sampling scores uniformly at random from a continuous [0..1) range for each source and target token in a given sentence pair as well as for the sentence-level QE score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4"
},
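A sketch of this baseline, assuming tokenized source and MT sentences (the function name and output format are illustrative, not the baseline's actual code):

```python
import random

def random_baseline(src_tokens, mt_tokens, seed=None):
    """Uniform [0, 1) scores for every source/target token and for the sentence."""
    rng = random.Random(seed)
    return {
        "source_scores": [rng.random() for _ in src_tokens],
        "target_scores": [rng.random() for _ in mt_tokens],
        "sentence_score": rng.random(),
    }
```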
{
"text": "Transquest-LIME uses TransQuest QE models described in Ranasinghe et al. (2020) to produce sentence-level scores. TransQuest follows the current standard practice of building task-specific NLP models by fine-tuning pre-trained multilingual language models, such as XLM-Roberta, on taskspecific data. For Ro-En and Et-En, the Ro-En and Et-En TransQuest models are used, whereas for the zero-shot language pairs we use the multilingual variant of TransQuest, which was trained on a concatenation of MLQE-PE data. The post-hoc LIME explanation method (Ribeiro et al., 2016) is then applied to generate relevance scores for the source and target words. LIME is a simplification-based explanation technique, which fits a linear model in the vicinity of each test instance, to approximate the decision boundary of the complex model. Since in our sentence-level gold standard higher scores mean better quality, we invert LIME explanations so that higher values correspond to errors.",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF38"
},
{
"start": 548,
"end": 570,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4"
},
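Since LIME here amounts to fitting a local linear surrogate over token-masking perturbations of a regression model, the idea can be sketched as follows. This is a simplified re-implementation rather than the baseline's actual code, and `qe_model` is an assumed callable returning a sentence-level quality score.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(tokens, qe_model, n_samples=500, seed=0):
    """Token relevance via a local linear surrogate over masking perturbations.

    tokens:   target-side tokens of one MT output.
    qe_model: callable mapping a detokenized string to a sentence score
              (higher = better quality); an assumed interface.
    """
    rng = np.random.default_rng(seed)
    # Binary masks: 1 keeps the token, 0 drops it from the perturbed input.
    masks = rng.integers(0, 2, size=(n_samples, len(tokens)))
    masks[0] = 1  # keep the unperturbed sentence as one sample
    preds = np.array([
        qe_model(" ".join(t for t, keep in zip(tokens, m) if keep))
        for m in masks
    ])
    surrogate = Ridge(alpha=1.0).fit(masks, preds)
    # Tokens that raise predicted quality get positive coefficients; negate so
    # that higher output means more likely an error, mirroring the inversion
    # of LIME explanations described above.
    return (-surrogate.coef_).tolist()
```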
{
"text": "XMover-SHAP uses the reference-free metric XMoverScore to rate translations and uses the (likewise post-hoc) SHAP explainer (Lundberg and Lee, 2017) to explain the ratings. In particular, given a source-translation pair, XMoverScore provides a real number to indicate the quality of the translation, in terms of its semantic overlapping with the source sentence, using re-mapped multilingual BERT embeddings and a target-side language model. 9 To explain the contribution of each word in the rating, SHAP creates perturbations of the source/translation sentence by masking out some words and estimates the average marginal contribution of each word across all possible perturbations. The source code for all the baseline systems is available at https://github.com/eval4nlp/ SharedTask2021/tree/main/baselines.",
"cite_spans": [
{
"start": 138,
"end": 148,
"text": "Lee, 2017)",
"ref_id": "BIBREF30"
},
{
"start": 442,
"end": 443,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4"
},
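The underlying idea of estimating each word's average marginal contribution can be sketched with Monte Carlo Shapley sampling. This is a simplified illustration, not the SHAP library's explainer, and `score_fn` is an assumed stand-in for scoring a partially masked input (e.g. XMoverScore on the translation with the remaining tokens removed).

```python
import random

def sampled_shapley(tokens, score_fn, n_permutations=200, seed=0):
    """Monte Carlo Shapley estimate of each token's contribution to the score."""
    rng = random.Random(seed)
    n = len(tokens)
    contrib = [0.0] * n
    for _ in range(n_permutations):
        order = list(range(n))
        rng.shuffle(order)
        kept_idx = []
        prev = score_fn([])  # score with every token masked out
        for idx in order:
            kept_idx.append(idx)
            kept_idx.sort()  # keep the original token order in the partial input
            curr = score_fn([tokens[i] for i in kept_idx])
            contrib[idx] += curr - prev
            prev = curr
    return [c / n_permutations for c in contrib]
```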
{
"text": "For this first edition of the shared task, we had a total of 6 participating teams listed in Table 5 . 10 Below, we briefly describe the submitted approaches.",
"cite_spans": [
{
"start": 103,
"end": 105,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "NICT-Kyoto use synthetic data to fine-tune the XLM-Roberta language model for the QE task. To produce synthetic sentence-level scores, they translate publicly available parallel corpora using SOTA neural MT systems and compute three referencebased metrics: ChrF (Popovi\u0107, 2015) , TER (Snover et al., 2006) and BLEU (Papineni et al., 2002) . To simulate word-level annotation, they derive wordlevel labels from the alignment between the MT outputs and human reference translations. The QE model is then jointly trained to predict the scores from different metrics as well as word-level tags. A metric embedding component is proposed where each metric is represented with a set of learnable parameters. An attention mechanism between the metric embeddings and the input representations is employed to obtain word-level scores as explanations for the sentence-level predictions.",
"cite_spans": [
{
"start": 262,
"end": 277,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF37"
},
{
"start": 284,
"end": 305,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF44"
},
{
"start": 315,
"end": 338,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "IST-Unbabel participated in the constrained and unconstrained tracks of the shared task. For the constrained track (\"IST-Unbabel\" in Table 6 ), they used a set of explainability methods to extract the relevance of the input tokens from sentencelevel QE models built on top of XLM-Roberta and RemBERT. The explainability methods explored in this work include attention-based, gradient-based and perturbation based approaches, as well as rationalization by construction. The best performing method which was submitted to the competition relies on the attention mechanism of the pretrained Transformers in order to obtain the relevance scores for each token. In addition, scaling attention weights by the L2 norm of value vectors as suggested in Kobayashi et al. (2020) resulted in a further boost in performance.",
"cite_spans": [
{
"start": 743,
"end": 766,
"text": "Kobayashi et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
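The norm-scaled attention idea can be sketched as follows, assuming access to one layer's attention weights and value vectors; shapes follow common Transformer conventions, and this is an illustration rather than the submission's code.

```python
import numpy as np

def value_norm_scaled_relevance(attn, values):
    """Token relevance from attention weights scaled by value-vector norms.

    attn:   (heads, seq_len, seq_len) attention weights of one layer.
    values: (heads, seq_len, head_dim) value vectors of the same layer.
    Each weight alpha_ij is scaled by ||v_j||; the relevance of token j is the
    total scaled attention it receives, averaged over heads and query positions.
    """
    v_norms = np.linalg.norm(values, axis=-1)      # (heads, seq_len)
    scaled = attn * v_norms[:, np.newaxis, :]      # alpha_ij * ||v_j||
    return scaled.mean(axis=(0, 1))                # (seq_len,)
```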
{
"text": "For the unconstrained track (\"IST-Unbabel*\" in Table 6 ), they add a word-level loss to the sentencelevel models and train jointly using the annotated data from the MLQE-PE dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "HeyTUDa use the TransQuest QE models (Ranasinghe et al., 2020) for sentence-level prediction and a set of explainability techniques to estimate the relevance of each source and target word. Specifically, they explore three perturbation-based methods: LIME, SHAP, and occlusion (Zeiler and Fergus, 2014), as well as three gradient-based methods: DeepLift (Shrikumar et al., 2017) , Layer Gradient x Activation (Shrikumar et al., 2016) and Integrated Gradients (Sundararajan et al., 2017b) . They further use an unsupervised ensembling method to combine the different explainability approaches.",
"cite_spans": [
{
"start": 37,
"end": 62,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 354,
"end": 378,
"text": "(Shrikumar et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 409,
"end": 433,
"text": "(Shrikumar et al., 2016)",
"ref_id": "BIBREF43"
},
{
"start": 459,
"end": 487,
"text": "(Sundararajan et al., 2017b)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "Gringham use the reference-free metrics XBERTScore (i.e., BERTScore (Zhang et al., 2020) with cross-lingual embeddings) and XMoverScore and make them inherently interpretable by considering the token alignments produced by the models. The intuition is that words that are not well-aligned are most likely erroneous. Specifically, they explore XBERTScore and XMoverScore as sentence-level models and use the corresponding similarity (or distance) matrices to produce token-level scores.",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "CLIP-UMD propose an ensemble of two approaches: (1) the LIME explanation technique applied to the TransQuest sentence-level model; (2) Divergent mBERT (Briakou and Carpuat, 2020) , which is a BERT-based model that can detect crosslingual semantic divergences. Divergent mBERT is trained using synthetic data where semantic divergences are introduced automatically following a set of pre-defined perturbations. To produce a combination of the two methods, the predictions from each approach are averaged.",
"cite_spans": [
{
"start": 151,
"end": 178,
"text": "(Briakou and Carpuat, 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "CUNI-Prague participated in the unconstrained track. They fine-tune the XLM-R model for wordlevel and sentence-level QE. To map sentence piece tokenization from XLM-R to Moses tokenization, they ignore all sentence piece tokens corresponding to a given Moses token except the first one. Table 6 shows the results of the shared task. We report the word-level metrics presented in Section 3, as well as Pearson correlation at sentence level. The values of the \"Rank\" columns are computed by first ranking the participants according to each of the three word-level metrics and then averaging the resulting rankings. 11 First, we note that all of the submissions outperform the three baselines for all the language pairs, 12 which indicates that error detection can indeed be approached as rationale extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 294,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participants",
"sec_num": "5"
},
{
"text": "Approaches Overall, the submitted approaches vary a lot in the way they addressed the task. The following trends can be identified:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 Following the recent standard in QE and similar multilingual NLP tasks, all the approaches rely on multilingual Transformer-based language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 Submissions to the unconstrained track use the SOTA approach to word-level supervision explored previously by Lee (2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 The use of synthetic data produced by aligning MT outputs and reference translations from existing parallel corpora proves an efficient strategy to identify translation errors. Supervising the predictions based on Transformer attention weights with the labels derived from synthetic data was used by the winning submission to the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 The approaches that rely on attention weights to predict human rationales (NICT-Kyoto and IST-Unbabel) achieve the best results for the constrained track. Table 6 : Official results of the Eval4NLP Shared Task on Explainable Quality Estimation. Submissions to the unconstrained track are marked with *. We mark the NICT Kyoto submissions with a \u2020 , as they submitted to the constrained track, but use synthetic data for word-level supervision. Submissions not significantly outperformed by any other submission according to paired t-test for each metric are marked in bold. N/A means that the participating team did not submit the word-level scores for the source sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 Both IST-Unbabel and HeyTUDa explore a wide set of explanation methods. The differences in performance are likely due to the method used for the final submission. While IST-Unbabel submission explores normalized attention weights, HeyTUDa use an ensemble of gradient-based approaches. A possible reason for the inferior performance of Hey-TUDa is that the gradient is computed with respect to the embedding layer. As noted by Fomicheva et al. (2021) , attribution to the embedding layer in the Transformer-based QE models does not provide strong results for the error detection task since word representations at the embedding layer do not capture contextual information, which is crucial for predicting translation quality.",
"cite_spans": [
{
"start": 428,
"end": 451,
"text": "Fomicheva et al. (2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "\u2022 Gringham follow an entirely different strategy where they modify an existing referencefree metric to obtain both sentence score and word-level explanations in an unsupervised way. A similar approach is explored in our XMover-SHAP baseline, but the difference is that we apply SHAP explainer on top of XMover, while Gringham makes the XMover-Score inherently interpretable, which leads to better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Winners The overall winner of the competition is the submission to the constrained track from NICT-Kyoto, which wins on 3 out of 4 language pairs, according to the source and target ranking. Fine-tuning on large amounts of synthetic data as well as the use of attention mechanism between the evaluation metric embeddings and the contextualized input representations seem to be the key to their performance. We note, however, that they offer a mixed approach with word-level supervision on synthetic data. Among the constrained approaches that do not use any supervision at word level, the best performing submission is IST-Unbabel, which outperforms other constrained submissions for all language pairs, except Ru-De, where they perform on par with Gringham on the target side and are surpassed by Gringham on the source side. For the unconstrained track we received only two submissions, from which IST-Unbabel* performs the best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Sentence-level correlation is not predictive of the performance of the submissions at detecting relevant tokens. This is due to the fact that submitted approaches vary in the role played by the sentence-level model. In fact, if we look at the submissions that follow comparable strategies, we do observe a correspondence between sentence-level and token-level results. For example, among the approaches that build upon a sentence-level QE model and use post-hoc methods to explain the predictions, IST-Unbabel tends to achieve higher performance both in terms of the token-level results and in terms of the Pearson correlation with sentence ratings, compared to HeyTUDa and the TransQuest-LIME baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The performance on zero-shot language pairs is lower than for Et-En and Ro-En. This is the case for all approaches except NICT-Kyoto on Ru-De, where the performance at word-level is comparable to the results for Et-En and Ro-En, even though the Pearson correlation for sentence scores is inferior. We attribute this outcome to the use of supervision with synthetic data, which helps boost performance for word-level QE when no manually labelled data is available, as has been shown by Tuan et al. (2021) . Performance degradation for De-Zh is considerably larger than Ru-De. De-Zh was among the language pairs with the lowest inter-annotator agreement and, in addition, had a different distribution of sentence-level scores, with many high-quality translations, according to the annotators (see Section 2.1).",
"cite_spans": [
{
"start": 485,
"end": 503,
"text": "Tuan et al. (2021)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Limitations of the evaluation settings Our current evaluation settings can be further improved in various ways. First, the submissions were ranked according to the global statistics, i.e. by comparing the mean AUC, AP and Rec-TopK scores of different submissions over a common set of test instances. However, such aggregation mechanisms ignore how many of its competitors a given submission outperforms and on how many test instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In the future we plan to follow a more rigorous approach suggested by Peyrard et al. (2021) and use the Bradley-Terry (BT) model (Bradley and Terry, 1952) , which leverages the instance-level pairing of metric scores. Second, the metrics used for evaluation are tailored for unsupervised explainability approaches that produce continuous scores, but they do not allow a direct comparison with the SOTA work on word-level QE, which is evaluated using F-score and Matthews correlation coefficient (Specia et al., 2020) . One way to address this would be to require the participants to submit binary scores, but we discarded this option in this first edition of the shared task, as it would substantially limit the exploration of the explainability approaches.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Peyrard et al. (2021)",
"ref_id": "BIBREF35"
},
{
"start": 129,
"end": 154,
"text": "(Bradley and Terry, 1952)",
"ref_id": "BIBREF5"
},
{
"start": 495,
"end": 516,
"text": "(Specia et al., 2020)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Finally, the binary rationales obtained from our pool of annotators through majority voting do not capture the fact that some words are more relevant for sentence-level quality than others. As shown in Table 3 , an alternative version of the data can be produced by averaging the scores assigned to each word by individual annotators, as an indication of the severity of translated errors. In the future, we plan to study to what extent such scores agree with the continuous explanation scores produced by the participants. Another limitation of our annotation scheme is that sometimes a word may be missing in the machine translation, which can then not be highlighted (e.g., Russian does often not use determiners and the MT system may wrongly omit it when translating into English or German).",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this paper, we presented the findings of the Eval4NLP-2021 shared task on explainable Quality Estimation (QE), where the goal is to not only produce a sentence-level score for an MT output, given a source sentence, but also highlight erroneous words in the target (and source) sentence explaining the score. We detailed the data annotation, involving two novel non-English language pairs, our baselines (post-hoc explanation techniques on top of state-of-the-art QE models), as well as the participants' approaches to the task. These include supervised approaches, training on synthetic data as well as genuine post-hoc and inherent explainability techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The scope for future research is huge: for example, we aim to include new language pairs, especially low-resource ones, address explainability for metrics in other NLP tasks, e.g. semantic textual similarity (Agirre et al., 2016) and summarization , and identify error categories of highlighted words, ideally in an unsupervised manner.",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "While QE is typically treated as a supervised task, a related research direction is reference-free evaluation, which",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When there is an even number of annotators, we weight the annotations by annotator reliability measured using their average agreement with the other annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The interest towards MQM has recently increased due to a higher overall quality of MT(Freitag et al., 2021), but the aforementioned issues still remain unsolved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We did not ask the participants to normalize the scores, as we are only interested in the ranking of tokens according to their relevance for sentence-level quality.8 As shown byMcDonnell et al. (2017), rationales increase the reliability of human annotation when judging the relevance of webpages for information retrieval. In the future, we plan to investigate whether this also applies to MT evaluation and providing word-level explanations increases the consistency of sentence-level assessments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that XMoverScore is an unsupervised reference-free metric, in contrast to the supervised TransQuest QE model.10 Initially, there were seven participating teams, but one of them opted out after the competition ended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This ranking is slightly different from the Codalab results, as one of the teams retracted from the competition.12 The only exception is HeyTUDa, which is outperformed by XMover-SHAP for De-Zh and by TransQuest-LIME for Et-En and Ro-En.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Marina Fomicheva was supported by funding from the Bergamot project (EU H2020 Grant No. 825303). Piyawat Lertvittayakumjorn was supported by a scholarship from Anandamahidol Foundation. We would like to thank Lisa Yankovskaya and Mark Fishel from the University of Tartu for helping organize and monitor the manual quality annotation. We also thank Anton Malinovskiy for adapting the Appraise interface for quality annotation with rationales. Finally, we gratefully thank the Artificial Intelligence Journal (https: //aij.ijcai.org/) and Salesforce Research for their financial support enabling our human annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2016 task 2: Interpretable semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Maritxalar",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uria",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "512--524",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Aitor Gonzalez-Agirre, I\u00f1igo Lopez- Gazpio, Montse Maritxalar, German Rigau, and Larraitz Uria. 2016. SemEval-2016 task 2: Inter- pretable semantic textual similarity. In Proceed- ings of the 10th International Workshop on Seman- tic Evaluation (SemEval-2016), pages 512-524, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A diagnostic study of explainability techniques for text classification",
"authors": [
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13295"
]
},
"num": null,
"urls": [],
"raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classifica- tion. arXiv preprint arXiv:2009.13295.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence esti- mation for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Proceedings of the Second Conference on Machine Translation",
"authors": [],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the Second Conference on Ma- chine Translation, Volume 2: Shared Tasks Papers, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Rank analysis of incomplete block designs: I. the method of paired comparisons",
"authors": [
{
"first": "Ralph",
"middle": [
"Allan"
],
"last": "Bradley",
"suffix": ""
},
{
"first": "Milton",
"middle": [
"E."
],
"last": "Terry",
"suffix": ""
}
],
"year": 1952,
"venue": "Biometrika",
"volume": "39",
"issue": "3/4",
"pages": "324--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324- 345.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank",
"authors": [
{
"first": "Eleftheria",
"middle": [],
"last": "Briakou",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1563--1580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleftheria Briakou and Marine Carpuat. 2020. De- tecting Fine-Grained Cross-Lingual Semantic Diver- gences without Supervision by Learning to Rank. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1563-1580, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A survey of the state of explainable AI for natural language processing",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Ban",
"middle": [],
"last": "Kawas",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Sen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "447--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yan- nis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural lan- guage processing. In Proceedings of the 1st Con- ference of the Asia-Pacific Chapter of the Associa- tion for Computational Linguistics and the 10th In- ternational Joint Conference on Natural Language Processing, pages 447-459, Suzhou, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "ERASER: A benchmark to evaluate rationalized NLP models",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Deyoung",
"suffix": ""
},
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [],
"last": "Fatema Rajani",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4443--4458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards a rigorous science of interpretable machine learning",
"authors": [
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "Been",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08608"
]
},
"num": null,
"urls": [],
"raw_text": "Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Explaining errors in machine translation with absolute gradient ensembles",
"authors": [
{
"first": "Melda",
"middle": [],
"last": "Eksi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Gelbing",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Stieber",
"suffix": ""
},
{
"first": "Chi",
"middle": [
"Viet"
],
"last": "Vu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melda Eksi, Erik Gelbing, Jonathan Stieber, and Chi Viet Vu. 2021. Explaining errors in machine translation with absolute gradient ensembles. In Pro- ceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Appraise: An open-source toolkit for manual evaluation of machine translation output",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2012,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "98",
"issue": "",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Federmann. 2012. Appraise: An open-source toolkit for manual evaluation of machine translation output. The Prague Bulletin of Mathematical Lin- guistics, 98:25-35.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translation error detection as rationale extraction",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2021. Translation error detection as rationale extrac- tion.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Martins. 2020. MLQE-PE: A multilingual quality estimation and post-editing dataset",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Lopatina",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.04480"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Erick Fonseca, Fr\u00e9d\u00e9ric Blain, Vishrav Chaudhary, Francisco Guzm\u00e1n, Nina Lopatina, Lucia Specia, and Andr\u00e9 F. T. Mar- tins. 2020. MLQE-PE: A multilingual quality es- timation and post-editing dataset. arXiv preprint arXiv:2010.04480.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Findings of the WMT 2019 shared tasks on quality estimation",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the WMT 2019 shared tasks on quality esti- mation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Viresh",
"middle": [],
"last": "Ratnakar",
"suffix": ""
},
{
"first": "Qijun",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SU-PERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1347--1354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Gao, Wei Zhao, and Steffen Eger. 2020. SU- PERT: Towards new frontiers in unsupervised evalu- ation metrics for multi-document summarization. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1347- 1354, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Can machine translation systems be evaluated by the crowd alone",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "1--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation sys- tems be evaluated by the crowd alone. Natural Lan- guage Engineering, FirstView:1-28.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6098--6111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource ma- chine translation: Nepali-English and Sinhala- English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning to faithfully rationalize by construction",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4459--4473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and By- ron C. Wallace. 2020. Learning to faithfully rational- ize by construction. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4459-4473, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The umd submission to the explainable mt quality estimation shared task: Combining explanation models with sequence labeling",
"authors": [
{
"first": "Tasnim",
"middle": [],
"last": "Kabir",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tasnim Kabir and Marine Carpuat. 2021. The umd submission to the explainable mt quality estimation shared task: Combining explanation models with se- quence labeling. In Proceedings of the 2nd Work- shop on Evaluation and Comparison for NLP sys- tems.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Global explainability of bert-based metrics by disentangling along linguistic factors",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Kaster",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2021,
"venue": "EMNLP 2021, Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin Kaster, Wei Zhao, and Steffen Eger. 2021. Global explainability of bert-based metrics by disen- tangling along linguistic factors. In EMNLP 2021, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Attention is not only a weight: Analyzing transformers with vector norms",
"authors": [
{
"first": "Goro",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Tatsuki",
"middle": [],
"last": "Kuribayashi",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10102"
]
},
"num": null,
"urls": [],
"raw_text": "Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. arXiv preprint arXiv:2004.10102.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Two-phase cross-lingual language model fine-tuning for machine translation quality estimation",
"authors": [
{
"first": "Dongjun",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Shared Tasks Papers, Online. Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality es- timation. In Proceedings of the Fifth Conference on Machine Translation, Volume 2: Shared Tasks Papers, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Rationalizing neural predictions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Reference-free word-and sentence-level translation evaluation with token-matching metrics",
"authors": [
{
"first": "Christoph",
"middle": [
"Wolfgang"
],
"last": "Leiter",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Wolfgang Leiter. 2021. Reference-free word-and sentence-level translation evaluation with token-matching metrics. In Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Assessing inter-annotator agreement for translation error annotation",
"authors": [
{
"first": "Arle",
"middle": [],
"last": "Lommel",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovic",
"suffix": ""
},
{
"first": "Aljoscha",
"middle": [],
"last": "Burchardt",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC Workshop on Automatic and Manual Metrics for Operational Translation Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arle Lommel, Maja Popovic, and Aljoscha Burchardt. 2014a. Assessing inter-annotator agreement for translation error annotation. In LREC Workshop on Automatic and Manual Metrics for Operational Translation Evaluation.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradum\u00e0tica: tecnologies de la traducci\u00f3",
"authors": [
{
"first": "Arle",
"middle": [],
"last": "Richard Lommel",
"suffix": ""
},
{
"first": "Aljoscha",
"middle": [],
"last": "Burchardt",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "0",
"issue": "",
"pages": "455--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arle Richard Lommel, Aljoscha Burchardt, and Hans Uszkoreit. 2014b. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradum\u00e0tica: tecnolo- gies de la traducci\u00f3, 0(12):455-463.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "Scott",
"middle": [
"M"
],
"last": "Lundberg",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A uni- fied approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4765-4774. Curran Associates, Inc.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The many benefits of annotator rationales for relevance judgments",
"authors": [
{
"first": "Tyler",
"middle": [],
"last": "Mcdonnell",
"suffix": ""
},
{
"first": "M\u00fccahid",
"middle": [],
"last": "Kutlu",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "4909--4913",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tyler McDonnell, M\u00fccahid Kutlu, Tamer Elsayed, and Matthew Lease. 2017. The many benefits of anno- tator rationales for relevance judgments. In IJCAI, pages 4909-4913.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Intelligent agent transparency in human-agent teaming for multi-uxv management",
"authors": [
{
"first": "Joseph",
"middle": [
"E"
],
"last": "Mercado",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"A"
],
"last": "Rupp",
"suffix": ""
},
{
"first": "Jessie",
"middle": [
"YC"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Barnes",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Barber",
"suffix": ""
},
{
"first": "Katelyn",
"middle": [],
"last": "Procci",
"suffix": ""
}
],
"year": 2016,
"venue": "Human factors",
"volume": "58",
"issue": "3",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Kate- lyn Procci. 2016. Intelligent agent transparency in human-agent teaming for multi-uxv management. Human factors, 58(3):401-415.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Better than average: Paired evaluation of NLP systems",
"authors": [
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "2301--2315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxime Peyrard, Wei Zhao, Steffen Eger, and Robert West. 2021. Better than average: Paired evaluation of NLP systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2301-2315, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Explainable quality estimation: Cuni eval4nlp submission",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Pol\u00e1k",
"suffix": ""
},
{
"first": "Muskaan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Pol\u00e1k, Muskaan Singh, and Ond\u0159ej Bojar. 2021. Explainable quality estimation: Cuni eval4nlp sub- mission. In Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "372--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 372-375, Lisboa, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Transquest at wmt2020: Sentencelevel direct assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Shared Tasks Papers, Online. Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. Transquest at wmt2020: Sentence- level direct assessment. In Proceedings of the Fifth Conference on Machine Translation, Volume 2: Shared Tasks Papers, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "why should i trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should i trust you?\": Explain- ing the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 1135-1144, New York, NY, USA. Asso- ciation for Computing Machinery.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Error identification for machine translation with metric embedding and attention",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Rubino, Atsushi Fujita, and Benjamin Marie. 2021. Error identification for machine translation with metric embedding and attention. In Proceed- ings of the 2nd Workshop on Evaluation and Com- parison for NLP systems.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Restricting the flow: Information bottlenecks for attribution",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Sixt",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Tombari",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Landgraf",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.00396"
]
},
"num": null,
"urls": [],
"raw_text": "Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Informa- tion bottlenecks for attribution. arXiv preprint arXiv:2001.00396.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3145--3153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning -Volume 70, ICML'17, page 3145-3153. JMLR.org.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Not just a black box: Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Shcherbina",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating ac- tivation differences. CoRR, abs/1605.01713.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Associa- tion for Machine Translation in the Americas: Tech- nical Papers, pages 223-231.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "SentSim: Crosslingual semantic evaluation of machine translation",
"authors": [
{
"first": "Yurun",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Junchen",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3143--3156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yurun Song, Junchen Zhao, and Lucia Specia. 2021. SentSim: Crosslingual semantic evaluation of ma- chine translation. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 3143-3156, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Findings of the WMT 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "743--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Findings of the WMT 2018 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [
"F"
],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "689--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n F. Astudillo, and Andr\u00e9 F. T. Martins. 2018a. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Belgium, Brussels. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Quality estimation for machine translation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Henrique"
],
"last": "Paetzold",
"suffix": ""
}
],
"year": 2018,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "11",
"issue": "1",
"pages": "1--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018b. Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11(1):1-162.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017a. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017b. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning -Volume 70, ICML'17, page 3319-3328. JMLR.org.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Chau",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Na- man Goyal, Vishrav Chaudhary, Jiatao Gu, and An- gela Fan. 2020. Multilingual translation with exten- sible multilingual pretraining and finetuning.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "The relationship between trust in ai and trustworthy machine learning technologies",
"authors": [
{
"first": "Ehsan",
"middle": [],
"last": "Toreini",
"suffix": ""
},
{
"first": "Mhairi",
"middle": [],
"last": "Aitken",
"suffix": ""
},
{
"first": "Kovila",
"middle": [],
"last": "Coopamootoo",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Gonzalez Zelaya",
"suffix": ""
},
{
"first": "Aad",
"middle": [],
"last": "Van Moorsel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 conference on fairness, accountability, and transparency",
"volume": "",
"issue": "",
"pages": "272--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehsan Toreini, Mhairi Aitken, Kovila Coopamootoo, Karen Elliott, Carlos Gonzalez Zelaya, and Aad Van Moorsel. 2020. The relationship between trust in ai and trustworthy machine learning technologies. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 272-283.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Ist-unbabel 2021 submission for the explainable quality estimation shared task",
"authors": [
{
"first": "Marcos",
"middle": [
"V"
],
"last": "Treviso",
"suffix": ""
},
{
"first": "Nuno",
"middle": [
"M"
],
"last": "Guerreiro",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison for NLP systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos V. Treviso, Nuno M. Guerreiro, Ricardo Rei, and Andr\u00e9 F.T. Martins. 2021. Ist-unbabel 2021 sub- mission for the explainable quality estimation shared task. In Proceedings of the 2nd Workshop on Evalu- ation and Comparison for NLP systems.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Quality estimation without humanlabeled data",
"authors": [
{
"first": "Yi-Lin",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Kishky",
"suffix": ""
},
{
"first": "Adithya",
"middle": [],
"last": "Renduchintala",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.04020"
]
},
"num": null,
"urls": [],
"raw_text": "Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchin- tala, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Lu- cia Specia. 2021. Quality estimation without human- labeled data. arXiv preprint arXiv:2102.04020.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Rethinking cooperative rationalization: Introspective extraction and complement control",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13294"
]
},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationaliza- tion: Introspective extraction and complement con- trol. arXiv preprint arXiv:1910.13294.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Using \"annotator rationales\" to improve machine learning for text categorization",
"authors": [
{
"first": "Omar",
"middle": [],
"last": "Zaidan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Piatko",
"suffix": ""
}
],
"year": 2007,
"venue": "Human language technologies 2007: The conference of the North American chapter of the association for computational linguistics; proceedings of the main conference",
"volume": "",
"issue": "",
"pages": "260--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. In Human language technologies 2007: The conference of the North American chapter of the association for computa- tional linguistics; proceedings of the main confer- ence, pages 260-267.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Visualizing and understanding convolutional networks",
"authors": [
{
"first": "Matthew",
"middle": [
"D."
],
"last": "Zeiler",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision -ECCV 2014",
"volume": "",
"issue": "",
"pages": "818--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Com- puter Vision -ECCV 2014, pages 818-833, Cham. Springer International Publishing.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1656--1671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Goran Glava\u0161, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the lim- itations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1656- 1671, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Recall, precision and average precision",
"authors": [
{
"first": "Mu",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mu Zhu. 2004. Recall, precision and average precision. Department of Statistics and Actuarial Science, Uni- versity of Waterloo, Waterloo, 2(30):6.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Distribution of sentence-level scores for each language pair.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Not all annotators annotated all sentences for Ru-De and De-Zh. Individual annotators did 1411, 871, 1101, 1026 sentences for De-Zh, and 601, 1002, 1181, 1001 sentences for Ru-De.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Screenshot of the annotation interface.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "",
"content": "<table><tr><td>: Total number of source tokens, target to-</td></tr><tr><td>kens, sentences and sentences with lower-than-perfect</td></tr><tr><td>sentence score (i.e. sentences with rationales) in the</td></tr><tr><td>Eval4NLP 2021 test set.</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Percentage of source and MT tokens annotated as rationales. For comparison, the percentage of source and target tokens annotated as errors in the same test partition of the MLQE-PE dataset is provided.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "Participants of the Eval4NLP Shared Task on Explainable Quality Estimation.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}