{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:39.175506Z"
},
"title": "The UMD Submission to the Explainable MT Quality Estimation Shared Task: Combining Explanation Models with Sequence Labeling",
"authors": [
{
"first": "Tasnim",
"middle": [],
"last": "Kabir",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"settlement": "College Park"
}
},
"email": "[email protected]"
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"settlement": "College Park"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the UMD submission to the Explainable Quality Estimation Shared Task at the Eval4NLP 2021 Workshop on \"Evaluation & Comparison of NLP Systems\". We participated in the word-level and sentencelevel MT Quality Estimation (QE) constrained tasks for all language pairs: Estonian-English, Romanian-English, German-Chinese, and Russian-German. Our approach combines the predictions of a word-level explainer model on top of a sentence-level QE model and a sequence labeler trained on synthetic data. These models are based on pre-trained multilingual language models and do not require any word-level annotations for training, making them well suited to zero-shot settings. Our best performing system improves over the best baseline across all metrics and language pairs, with an average gain of 0.1 in AUC, Average Precision, and Recall at Top-K score.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the UMD submission to the Explainable Quality Estimation Shared Task at the Eval4NLP 2021 Workshop on \"Evaluation & Comparison of NLP Systems\". We participated in the word-level and sentencelevel MT Quality Estimation (QE) constrained tasks for all language pairs: Estonian-English, Romanian-English, German-Chinese, and Russian-German. Our approach combines the predictions of a word-level explainer model on top of a sentence-level QE model and a sequence labeler trained on synthetic data. These models are based on pre-trained multilingual language models and do not require any word-level annotations for training, making them well suited to zero-shot settings. Our best performing system improves over the best baseline across all metrics and language pairs, with an average gain of 0.1 in AUC, Average Precision, and Recall at Top-K score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality estimation (QE) is the task of predicting the quality of the machine translation (MT) output without reference translation. Predictions can be done at different levels of granularity, such as sentences or words. The explainable QE shared task (Fomicheva et al., 2021a) proposes to frame the identification of translation errors as an explainable QE task, where sentence-level quality judgments are explained by highlighting the words responsible for errors in the MT hypothesis. Given a source sentence and an MT hypothesis, systems are thus asked to provide word-level judgments of translation quality in addition to sentence-level judgments.",
"cite_spans": [
{
"start": 251,
"end": 276,
"text": "(Fomicheva et al., 2021a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our submission builds on state-of-theart sentence-level QE models, MonoTran-sQuest (Ranasinghe et al., 2020a,b) . As suggested by the organizers, we rely on the LIME explanation model (Ribeiro et al., 2016) to obtain word-level prediction from the MonoTransQuest model's sentence-level score. We hypothesize that synthetic examples of translation errors can help improve word-level predictions. As a result, we combine the predictions of MonoTransQuest-LIME with those of the Divergent mBERT model which addresses the related task of detecting semantic divergences in bitext (Briakou and Carpuat, 2020) . Divergent mBERT model can detect fine-grained differences in bitext by learning to rank synthetic divergence examples of varying granularity. As a result, our approach does not require any word-level labels at training time. Both models are based on multilingual language models and are therefore amenable to zero-shot transfer.",
"cite_spans": [
{
"start": 83,
"end": 111,
"text": "(Ranasinghe et al., 2020a,b)",
"ref_id": null
},
{
"start": 184,
"end": 206,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 575,
"end": 602,
"text": "(Briakou and Carpuat, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our submitted system improves over its components and over the official baseline on all tracks and on all language pairs, based on all evaluation metrics (AUC, AP, Recall at Top-K, and Pearson's correlation). Compared to the best baseline system for target languages, it improves AUC by 0.119 for Estonian-English (Et-En), 0.068 for Romanian-English (Ro-En), 0.085 for German-Chinese (De-Zh), and 0.128 for Russian-German (Ru-De). Similarly, for AP score, it has achieved an improvement of 0.095 for Et-En, 0.074 for Ro-En, 0.064 for De-Zh, and 0.13 for Ru-De. For Recall at Top-K score, it has achieved an improvement of 0.103 for Et-En, 0.071 for Ro-En, 0.045 for De-Zh, and 0.13 for Ru-De. For source language word-level scores, it achieves an average gain of 0.18 for AUC, 0.071 for AP and 0.12 for Recall at Top-K score over the average of all languages' baseline scores. Finally, for sentence-level scores, it has achieved an improvement of 0.36 for Et-En, 0.359 for Ro-En, 0.271 for De-Zh, and 0.06 for Ru-De compared to the average of all baseline models for Pearson's correlation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first describe the two components of our ensemble and then explain how they are combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "The first ensemble component is based on one of the baselines provided by the organizers. It uses MonoTransQuest, the state-of-the-art model in the WMT 2020 QE shared task (Ranasinghe et al., 2020a,b) , including for mid-resource and high-resource language pairs. This model uses a single XLM-Roberta transformer model (Ranasinghe et al., 2020a,b) trained with data released in WMT quality estimation tasks in recent years. The input of the model is the concatenation of the original sentence x source and its translation x target , separated by the [SEP ] token. Therefore, x = x source , [SEP ] , x target and the model used the embedding of the [CLS] token as the input of a softmax layer, and this layer F predicts the sentence-level score F (x) of the translation at the sentence-level. Mean-squared-error loss is used as the objective function.",
"cite_spans": [
{
"start": 172,
"end": 200,
"text": "(Ranasinghe et al., 2020a,b)",
"ref_id": null
},
{
"start": 319,
"end": 347,
"text": "(Ranasinghe et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MonoTransQuest-LIME Model",
"sec_num": "2.1"
},
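{
"text": "To make this architecture concrete, the following minimal sketch (our illustration, not the authors' released code) builds such a sentence-level regressor with the HuggingFace Transformers library: the source and translation are encoded as a single pair, a one-output head over the [CLS] representation produces the quality score, and training uses mean-squared-error loss. The base checkpoint, the example sentences, and the gold score are placeholders.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n# Placeholder base checkpoint; MonoTransQuest releases fine-tuned XLM-R weights.\ntokenizer = AutoTokenizer.from_pretrained(\"xlm-roberta-base\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\"xlm-roberta-base\", num_labels=1)\n\nsrc = \"Tere hommikust!\"  # placeholder source sentence\nmt = \"Good morning!\"     # placeholder MT hypothesis\n\n# Source and translation are encoded as one pair, separated by [SEP]-style tokens.\ninputs = tokenizer(src, mt, return_tensors=\"pt\", truncation=True)\nscore = model(**inputs).logits.squeeze(-1)  # sentence-level quality score F(x)\n\n# One training step with mean-squared-error loss against a gold sentence-level score.\ngold = torch.tensor([0.72])\nloss = torch.nn.functional.mse_loss(score, gold)\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MonoTransQuest-LIME Model",
"sec_num": "2.1"
},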
{
"text": "For generating word-level scores from the sentence-level scores, the toolkit LIME is suggested by the organizers. LIME explains the predictions of a black-box model by providing a local linear approximation of the model's behavior. For generating an explanation for a prediction, LIME generates neighborhood data by randomly hiding features from the instance and then learns locally weighted linear models on this neighborhood data to explain each of the classes in an interpretable way 1 . Here, LIME treats words in the input sequence as features and thus lets us generate word-level QE scores from the MonoTransQuest sentence-level QE predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MonoTransQuest-LIME Model",
"sec_num": "2.1"
},
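{
"text": "As an illustration of this procedure (a simplified sketch, not the shared-task code), the snippet below perturbs the MT hypothesis by randomly hiding words, queries a black-box sentence-level scorer on each perturbation, and fits a locally weighted linear model whose coefficients act as word-level scores. The function qe_score is a hypothetical stand-in for the MonoTransQuest prediction function.\n\nimport numpy as np\nfrom sklearn.linear_model import Ridge\n\ndef lime_word_scores(src, mt, qe_score, n_samples=500, seed=0):\n    rng = np.random.default_rng(seed)\n    words = mt.split()\n    masks = rng.integers(0, 2, size=(n_samples, len(words)))  # 1 = keep word, 0 = hide it\n    masks[0] = 1  # include the unperturbed hypothesis\n\n    # Query the black-box sentence-level model on each perturbed hypothesis.\n    scores = np.array([\n        qe_score(src, \" \".join(w for w, keep in zip(words, m) if keep))\n        for m in masks\n    ])\n\n    # Weight neighborhood samples by similarity to the original (fraction of words kept).\n    weights = masks.mean(axis=1)\n\n    # Locally weighted linear surrogate; its coefficients are the word-level attributions.\n    surrogate = Ridge(alpha=1.0)\n    surrogate.fit(masks, scores, sample_weight=weights)\n    return dict(zip(words, surrogate.coef_))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MonoTransQuest-LIME Model",
"sec_num": "2.1"
},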
{
"text": "We use existing pre-trained MonoTransQuest models. Ranasinghe et al. (2020a,b ) note that the QE task can be challenging in the practical environment where the systems have to work in a multilingual setting, so selecting appropriate models for each language pair is key. As summarized in Table 1 , for the development languages (et-en, ro-en), we select existing MonoTransQuest models trained on the language pair tested. For the zero-shot test languages (de-zh, ru-de), we select existing MonoTransQuest models trained on language pairs that involve one of the two languages and English (en-zh and en-de, respectively).",
"cite_spans": [
{
"start": 51,
"end": 77,
"text": "Ranasinghe et al. (2020a,b",
"ref_id": null
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "MonoTransQuest-LIME Model",
"sec_num": "2.1"
},
{
"text": "Briakou and Carpuat (2020) introduced the Divergent mBERT model which is a BERT-based model that can detect cross-lingual semantic divergences by ranking synthetic divergences of varying granularity without supervision. Cross-lingual semantic divergence refers to the difference in meaning between sentences written in different languages (Vyas et al., 2018) and therefore might correspond to some adequacy errors observed in MT output.",
"cite_spans": [
{
"start": 339,
"end": 358,
"text": "(Vyas et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
{
"text": "The Divergent mBERT model is designed to make both sentence-level and word-level predictions. The input of this model is a sequence x generated by concatenating an English sentence x e and a French sentence x f with helper delimiter tokens. Therefore,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
{
"text": "x = ([CLS], x e , [SEP ], x f , [SEP ]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
{
"text": "Here, the [CLS] token serves as the representative for the sentence-pair x which is passed through a feed-forward network F to get the score F (x) which is converted into the probability that x is equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
{
"text": "For word-level prediction, the final hidden state h t is passed through a feed-forward layer and a softmax layer for each token y t in encoded sentence pair x. This produces the probability that the token y t belongs to the equivalent class. For sentencelevel prediction, the model uses margin-loss and for token-level prediction, it uses cross-entropy loss of all tokens. The word-level evaluation on this model found that it outperforms Random Baseline across all metrics. Therefore, this model proves that we can benefit from training even with noisy word-level labels. We can map this task to identifying the error in the word-level QE by marking all divergences as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
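{
"text": "A minimal sketch of such a token-level head (our paraphrase of the description above, not the released Divergent mBERT code; the hidden size and the two-class label set are assumptions) looks as follows.\n\nimport torch\nimport torch.nn as nn\n\nclass TokenDivergenceHead(nn.Module):\n    def __init__(self, hidden_size=768, num_classes=2):\n        super().__init__()\n        self.ff = nn.Linear(hidden_size, num_classes)\n\n    def forward(self, hidden_states):         # (batch, seq_len, hidden_size)\n        logits = self.ff(hidden_states)        # (batch, seq_len, 2)\n        return torch.softmax(logits, dim=-1)   # P(token is equivalent / divergent)\n\nhead = TokenDivergenceHead()\nhidden = torch.randn(1, 12, 768)  # stand-in for the final encoder states of 12 tokens\nprobs = head(hidden)\n\n# Cross-entropy over all tokens against (possibly noisy) synthetic word labels.\nlabels = torch.randint(0, 2, (1, 12))\nloss = nn.functional.cross_entropy(head.ff(hidden).view(-1, 2), labels.view(-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},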
{
"text": "We made a small change to the original Divergent mBERT model by fine-tuning XLM-Roberta (Conneau et al., 2020) rather than mBERT, and keeping the rest of the model architecture, loss definition, and training data unchanged. As a result, this model is trained on French-English sentence pairs, where positive examples of equivalence are drawn from bitext with a filtering step to ensure that they are not noisy, and negative samples are automatically generated by corrupting the positive samples to introduce meaning mismatches (e.g., by deleting dependency subtrees in one language, substituting words with near-synonyms, or phrases with other phrases that have the same syntactic structure). As a result, this model is used in zeroshot settings for all the test languages of the shared task and does not use any manual QE annotation.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Divergent mBERT",
"sec_num": "2.2"
},
{
"text": "We adopt the approach of Kepler et al. (2019) for building ensemble models for word-level quality estimation, which simply averages the predictions of the ensemble components. While their ensemble had five models, we average the predictions of the two models above, either at the sentence or word level. Given a source sentence (src) and the machine translation hypothesis (mt), Divergent mBERT and MonoTransQuest-LIME produce word-level scores for each word in the MT hypothesis. These are averaged to produce the final wordlevel score. The same process is used to combine sentence-level predictions. The overall system architecture is shown in Figure 1 for word-level predictions. The input src-mt represents the language pair for which the sentence-level and the word-level score are being generated.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "Kepler et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ensembling Method",
"sec_num": "2.3"
},
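{
"text": "A minimal sketch of this averaging step is given below; divergent_scores and lime_scores are hypothetical names for the per-word outputs of the two components on the same MT hypothesis.\n\ndef ensemble_word_scores(divergent_scores, lime_scores):\n    # Average the two components' predictions word by word.\n    assert len(divergent_scores) == len(lime_scores)\n    return [(d + l) / 2.0 for d, l in zip(divergent_scores, lime_scores)]\n\ndef ensemble_sentence_score(divergent_score, lime_score):\n    # The same averaging is applied to the two sentence-level scores.\n    return (divergent_score + lime_score) / 2.0\n\n# Example for a four-word MT hypothesis.\nprint(ensemble_word_scores([0.9, 0.1, 0.4, 0.7], [0.5, 0.3, 0.2, 0.9]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembling Method",
"sec_num": "2.3"
},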
{
"text": "In this section, we describe the data used for training, development, and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We use ensemble components that have been pretrained on different datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3.1"
},
{
"text": "MonoTransQuest We use the original models that have been publicly released. They were trained on publicly available datasets from recent WMT sentence-level quality estimation tasks (Specia et al., 2018; Fonseca et al., 2019; . These datasets were collected from Wikipedia and Reddit. In this setup, the Et-En and Ro-En are considered as medium resource language and En-Zh and En-De are considered as high resource language pairs. In Table 1 , we can see the lists the training data used to train the original pre-trained MonoTransQuest models. We can note that as there were no De-Zh and Ru-De language pairs used, thus, this model supports prediction in a zero-shot setting.",
"cite_spans": [
{
"start": 181,
"end": 202,
"text": "(Specia et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 203,
"end": 224,
"text": "Fonseca et al., 2019;",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3.1"
},
{
"text": "Divergent mBERT We used the same training data as the original model by Briakou and Carpuat (2020) . The training data was the English and French text from WikiMatrix which was normalized with Moses toolkit and tokenized. In our model, we have used \"XLMRobertaTokenizer\" where the original model used \"BERTTokenizer\". Similar to the original model, the alignment of English and French bitext was done using Berkeley word aligner. After filtering the noisy samples, the top 5500 samples, ranked by LASER similarity score, were picked, and then the synthetic divergent examples were generated. The synthetic data was generated similar to the original model's synthetic data generation process which is: subtree deletion by deleting a randomly selected subtree in the dependency parse of the English sentence, or French words aligned to English words in that subtree, Phrase Replacement by substituting random source or target sequences by another sequence of words with matching POS tags and lexical substitution by substituting English words with hypernyms or hyponyms from WordNet.",
"cite_spans": [
{
"start": 72,
"end": 98,
"text": "Briakou and Carpuat (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3.1"
},
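{
"text": "As an illustration of the lexical-substitution corruption only (a simplified sketch, not the original generation pipeline; the example sentence is a placeholder), the snippet below replaces one English word with a WordNet hypernym or hyponym to create a synthetic divergent example.\n\nimport random\nfrom nltk.corpus import wordnet  # requires: nltk.download(\"wordnet\")\n\ndef lexical_substitution(sentence, seed=0):\n    random.seed(seed)\n    words = sentence.split()\n    # Try words in random order until one has a WordNet hypernym or hyponym.\n    for i in random.sample(range(len(words)), len(words)):\n        synsets = wordnet.synsets(words[i])\n        if not synsets:\n            continue\n        related = synsets[0].hypernyms() + synsets[0].hyponyms()\n        if related:\n            words[i] = related[0].lemmas()[0].name().replace(\"_\", \" \")\n            return \" \".join(words)  # corrupted (divergent) sentence\n    return None  # no substitutable word found\n\nprint(lexical_substitution(\"The committee approved the new proposal\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3.1"
},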
{
"text": "We used the official shared task development data, which is drawn from the Multilingual Quality Estimation and Post-Editing (MLQE-PE) dataset . There are two language pairs in the development set, with 1000 sentences each: Estonian-English (Et-En) and Romanian-English (Ro-En).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Data",
"sec_num": "3.2"
},
{
"text": "The test data included four language pairs: Estonian-English (Et-En), Romanian-English (Ro-En), German-Chinese (De-Zh), and Russian-German (Ru-De). The first two language pairs are the same as the development set language pairs. German-English (De-Zh) and Russian-German (Ru-De) language pairs are zero-shot languages since they were not available in the development phase. Test set statistics are given in Table 2 . Models are evaluated against human annotations by submitting to the official leaderboard. ",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},
{
"text": "We used the same set of system configurations for all the language pairs in our experiment to ensure consistency among all language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Configuration",
"sec_num": "4"
},
{
"text": "MonoTransQuest We have used pre-trained MonoTransQuestmodel on the HuggingFace Transformers library (Wolf et al., 2019) . We used those pre-trained MonoTranquest models 2 . We have not changed any hyperparameter from those models.",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Configuration",
"sec_num": "4"
},
{
"text": "Divergent mBERT For training Divergent mBERT we have used a batch size of 16, Adam optimizer with learning rate 2e \u22125 and a linear rate warmup. The model was trained with only training data. The model was trained for five epochs. We have varied the hyperparameter settings: epoch was varied from 3 to 15 epochs, the margin was varied from 5 to 10, the alpha value was varied from 0.2 to 1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Configuration",
"sec_num": "4"
},
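{
"text": "A minimal sketch of this training configuration is shown below (our illustration; the tiny model and random batches stand in for the fine-tuned encoder and the synthetic divergence examples, and the combined margin/cross-entropy objective is replaced by a plain cross-entropy placeholder).\n\nimport torch\nfrom transformers import get_linear_schedule_with_warmup\n\nbatch_size, num_epochs = 16, 5\nmodel = torch.nn.Linear(8, 2)  # placeholder for the fine-tuned encoder and its heads\ntrain_loader = [(torch.randn(batch_size, 8), torch.randint(0, 2, (batch_size,)))] * 4\n\noptimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)\nscheduler = get_linear_schedule_with_warmup(\n    optimizer, num_warmup_steps=0,\n    num_training_steps=len(train_loader) * num_epochs)\n\nfor epoch in range(num_epochs):\n    for inputs, labels in train_loader:\n        loss = torch.nn.functional.cross_entropy(model(inputs), labels)\n        loss.backward()\n        optimizer.step()\n        scheduler.step()\n        optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Configuration",
"sec_num": "4"
},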
{
"text": "This section describes the official evaluation metrics for the shared task. For sentence-level scores, Pearson's correlation is used and for word-level scores, AUC, AP, and Recall at Top-K are used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "\u2022 Pearson's Correlation: This measures the strength and direction of a linear relationship between the model predicted sentencelevel score and human-annotated sentence-level score. Values always range between -1 (strong negative relationship) and +1 (strong positive relationship).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "\u2022 AUC Score: In this shared task, the AUC score between the model predicted output and gold explanation MT score is computed using sklearn 3 . Given a test set of N sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AU C = 1 N n AU C n (w n , a x T n )",
"eq_num": "(1)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "Here, w n is a vector representing binary gold word-level labels for each sentence n in the test set and a x T n is the vector for the model predicted word-level score for the target words x T in each target sentence in test set with length T . Equation 1 computes the AUC score to compare the model predicted word-level scores a against binary gold labels (Fomicheva et al., 2021b) . Here, AU C n is the area under the curve generated by plotting the true positive against false positive of the word-level scores of the n th sentence at different thresholds. AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.",
"cite_spans": [
{
"start": 357,
"end": 382,
"text": "(Fomicheva et al., 2021b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "\u2022 AP Score: AP (Average Precision) evaluates word-level predictions and complements AUC scores which can be overly optimistic for imbalanced data (Fomicheva et al., 2021b) . Average precision 4 is defined as:",
"cite_spans": [
{
"start": 146,
"end": 171,
"text": "(Fomicheva et al., 2021b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AP = n (R n \u2212 R n\u22121 )P n",
"eq_num": "(2)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "where P n and R n are the precision and recall at the n th threshold, where words are assigned to the positive class if the model predicts a score for this word that is higher than the n th threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "\u2022 Recall at Top-K: This metric checks whether the highest predicted values have been assigned to the words corresponding to actual errors. For example, if the gold standard output is 1, 1, 0, 0, 0, 0, 0 then the recall value checks whether in the model predicted output, the highest values have been assigned to the first and second word. Specifically this metric computes the proportion of words with the highest attribution corresponding to errors against the total number of errors in the MT output (Fomicheva et al., 2021b) :",
"cite_spans": [
{
"start": 502,
"end": 527,
"text": "(Fomicheva et al., 2021b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "Recall at T op -K = 1 k j\u2208e 1:k w j (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "Here, k is the number of errors in the sentence and e = argsort(a x T ) is the sequence of highest to lowest sorted indexes of target words according to the attribution scores. The final score is the average over test instances, and ranges from 0 to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
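{
"text": "The word-level metrics can be computed as sketched below (our illustration with scikit-learn, using illustrative gold labels and model scores for a seven-word MT hypothesis; the shared task averages these per-sentence values over the test set).\n\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score, average_precision_score\n\ngold = np.array([1, 1, 0, 0, 1, 1, 1])  # binary word-level error labels w\npred = np.array([-0.034, -0.090, -0.043, -0.023, 0.006, -0.039, -0.096])  # model word-level scores a\n\nauc = roc_auc_score(gold, pred)\nap = average_precision_score(gold, pred)\n\ndef recall_at_top_k(gold, pred):\n    k = int(gold.sum())            # number of gold errors in the sentence\n    top_k = np.argsort(-pred)[:k]  # indices of the k highest-scoring words\n    return gold[top_k].sum() / k\n\nprint(auc, ap, recall_at_top_k(gold, pred))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},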
{
"text": "We describe the performance of our models on the development and test sets using the official shared task metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results achieved by each of the organizers provided baseline systems, Divergent mBERT, and our ensemble method is described in Table 4 . This table described the results on the Et-En and Ro-En language pairs of the validation set. Our ensemble model outperforms the baselines, as well as its components according to all metrics. We can see a consistent improvement for all the language pairs in the development set. In comparison with the average of all baselines, for Et-En language pair, on target word-level scores, our ensemble method achieves an improvement of 0.15 in AUC score, 0.17 in AP score, and 0.17 in Recall at Top-K score. Similarly, for the source language, it achieves an improvement of 0.19 in the AUC score, 0.123 in AP score, and 0.2 in Recall at Top-K score over the average of all baselines. Similarly, For the Ro-En language pair, on target word-level scores, our ensemble method achieves an improvement of 0.15 in AUC score, 0.21 in AP score, and 0.231 in Recall at Top-K score over the average of all baselines. Similarly, for the source language, it achieves an improvement of 0.185 in the AUC score, 0.156 in AP score, and 0.23 in Recall at Top-K score over the average of all baselines. The Divergent mBERT model has a smaller but consistent advantage over all the baseline models.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Development Set Results",
"sec_num": "6.1"
},
{
"text": "If we take the average on baselines on Et-En, this model achieves an improvement of 0.126 on AUC, 0.043 on AP, and 0.126 on Recall at Top-K score. Similarly, on the average of all baselines for Ro-En, this model achieves an improvement of 0.152 on AUC, 0.1 on AP, and 0.187 on Recall at Top-K score. Overall, these results suggest that Divergent mBERT and MonoTransQuest have complementary strengths, which benefit the ensemble.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Set Results",
"sec_num": "6.1"
},
{
"text": "We illustrate the complementarity of the ensemble components with randomly selected examples from the Ro-En development set in Table 5 . The Divergent mBERT model predicts better error labels for short sentences than the MonoTransQuestmodel. However, MonoTransQuest is more accurate on longer or more complex sentences. De-Zh For De-Zh language pair, on target wordlevel scores, our ensemble method achieves an improvement of 0.13 in AUC score, 0.1 in AP score, and 0.09 in Recall at Top-K score over the average of all baselines. Similarly, for the source language, it achieves an improvement of 0.123 in the AUC score, 0.01 in AP score, and 0.08 in Recall at Top-K score over the average of all baselines. The acquirer is not a deposit exception Gold word-level label 1 1 0 0 1 1 1 MonoTransQuest label -0.034 -0.090 -0.043 -0.023 0.006 -0.039 -0.096 Divergent mBERT label 1 1 0 0 0 0 0 Source Sentence 2 Dac\u0203 IA este programat\u0203 pentru \" \" obiectivele pot fi induse implicit prin recompensarea unor tipuri de comportament sau prin pedepsirea altora.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Development Set Results",
"sec_num": "6.1"
},
{
"text": "This is because if IA is scheduled for another the objectives can be induced implicitly by rewarding some types of behaviour or by punishing others. Discussion Taken together, these results show that the ensemble method performs similarly for those language pairs that are used in the training phase (Et-En, Ro-En language pairs) and for the zero-shot language pairs (De-Zh, Ru-De). It has an average improvement of 0.15 in AUC, 0.18 in AP, and 0.16 in Recall at Top-K score for those language pairs which is in its training data and an average improvement of 0.16 in AUC, 0.13 in AP, and 0.12 in Recall at Top-K score for those language pairs which it is not trained on. Therefore, we can see we can have a consistent gain for all language pairs with this method without including that particular language pair in the training data. The Divergent mBERT model is effective on all language pairs, even though it is trained on synthetic data generated from English-French bitext: this suggests that the word-level weak supervision provided by the synthetic samples is robust, although it would be interesting to investigate the impact of the choice of training languages further in future work. Secondly, a clear takeaway is that the ensembling of different systems can give large gains, even if some of the subsystems are weak individually and even in zero-shot settings. Table 7 contains the results for the sentence-level submission on the test set. Evaluated on Pearson's correlation, our ensemble method has a consistent improvement of 0.36 for Et-En, 0.359 for Ro-En, 0.271 for De-Zh, and 0.06 for Ru-De compared to the average of all baseline models. However, in a zero-shot setting, Pearson's correlation varies significantly between different language pairs. We observe that the MonoTransQuest baseline achieves better performance on test language pairs, which is not surprising since it was trained on all the language pairs of WMT. This impacts results on the zero-shot languages: for De-Zh MonoTran-sQuest outperforms Divergent mBERT by a large margin, but the ensemble still benefits from Divergent mBERT. For Ru-De, the MonoTransQuest achieve the strongest level correlation and Divergent mBERT does not improve over it when added to the ensemble, unlike for word-level predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 1371,
"end": 1378,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Target Sentence 2",
"sec_num": null
},
{
"text": "The Divergent mBERT model has unequal performance across languages. It achieves the highest Pearson Correlation for Ro-En, which is the closest language pair to the one it is trained on (English-French) but performs poorly for Estonian-English and German-Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Scores on Test Set",
"sec_num": "6.3"
},
{
"text": "We described the University of Maryland's contribution to the Eval4NLP 2021 Shared Task on Quality Estimation. Our submission was based on ensembling existing models: (1) the state-of-theart framework MonoTransQuest model followed by the LIME explanation model, and (2) an mBERT model trained to detect cross-lingual semantic divergences. We show that averaging the prediction of these models outperforms all the baselines and their individual predictions, even though none of the ensemble components are trained with wordlevel supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Overall, our approach shows the benefits of leveraging pre-trained multilingual LMs to port to multiple language pairs, including in zero-shot settings: the Divergent mBERT component is even trained on a language pair that is not used for any of the test tasks. In the future, training Divergent mBERT with other language pairs can lead to more promising results. This work also shows the complementarity of explanation models and of sequence labelers trained on synthetic data for word-level predictions. In future work, controlled comparison of these approaches on the same languages and data conditions can lead to further insights on their respective strengths and weaknesses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://huggingface.co/MonoTransQuest",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://developers.google.com/machine-learning/crashcourse/classification/roc-and-auc 4 https://scikit-learn.org/stable/modules/generated/ sklearn.metrics.average_precision_score.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detecting fine-grained cross-lingual semantic divergences without supervision by learning to rank",
"authors": [
{
"first": "Eleftheria",
"middle": [],
"last": "Briakou",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1563--1580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleftheria Briakou and Marine Carpuat. 2020. Detect- ing fine-grained cross-lingual semantic divergences without supervision by learning to rank. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1563-1580, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The eval4nlp shared task on explainable quality estimation: Overview and results",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021a. The eval4nlp shared task on explainable quality estima- tion: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "and Nikolaos Aletras. 2021b. Translation error detection as rationale extraction",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2108.12197"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2021b. Translation error detection as rationale ex- traction. arXiv preprint arXiv:2108.12197.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mlqe-pe: A multilingual quality estimation and post-editing dataset",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Lopatina",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.04480"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Erick Fonseca, Fr\u00e9d\u00e9ric Blain, Vishrav Chaudhary, Francisco Guzm\u00e1n, Nina Lopatina, Lucia Specia, and Andr\u00e9 FT Martins. 2020. Mlqe-pe: A multilingual quality esti- mation and post-editing dataset. arXiv preprint arXiv:2010.04480.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of the WMT 2019 shared tasks on quality estimation",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5401"
]
},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the WMT 2019 shared tasks on quality esti- mation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unbabel's participation in the wmt19 translation quality estimation shared task",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "G\u00f3is",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Amin Farajian",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ant\u00f3nio",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Lopes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M Amin Farajian, Ant\u00f3nio V Lopes, and Andr\u00e9 FT Martins. 2019. Unbabel's par- ticipation in the wmt19 translation quality estima- tion shared task. WMT 2019, page 80.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transquest at wmt2020: Sentencelevel direct assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. Transquest at wmt2020: Sentence- level direct assessment. In Proceedings of the Fifth Conference on Machine Translation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transquest: Translation quality estimation with cross-lingual transformers",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. Transquest: Translation quality esti- mation with cross-lingual transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "why should i trust you?\" explaining the predictions of any classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\" explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Findings of the WMT 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "743--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Findings of the wmt 2018 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "689--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n Astudillo, and Andr\u00e9 FT Martins. 2018. Findings of the wmt 2018 shared task on quality es- timation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying semantic divergences in parallel text without annotations",
"authors": [
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1503--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yogarshi Vyas, Xing Niu, and Marine Carpuat. 2018. Identifying semantic divergences in parallel text without annotations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1503-1515.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "System architecture of the ensemble method.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Et-En : For Et-En language pair, on target wordlevel scores, our ensemble method achieves an improvement of 0.164 in AUC score, 0.186 in AP score, and 0.181 in Recall at Top-K score over the average of all baselines. Similarly, for the source language, it achieves an improvement of 0.236 in the AUC score, 0.13 in AP score, and 0.208 in Recall at Top-K score over the average of all baselines.Ro-En For Ro-En language pair, on target wordlevel scores, our ensemble method achieves an improvement of 0.129 in AUC score, 0.173 in AP score, and 0.135 in Recall at Top-K score over the average of all baselines. Similarly, for the source language, it achieves an improvement of 0.22 in the AUC score, 0.102 in AP score, and 0.193 in Recall at Top-K score over the average of all baselines.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "MonoTransQuest models used for each task."
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Test data statistics."
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>Batch Size</td><td>16</td></tr><tr><td>Loss Function</td><td>Sentence-level: Margin Loss Token-level: Cross Entropy</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td colspan=\"2\">Learning Rate 2e \u22125</td></tr><tr><td>Scheduler</td><td>Linear Schedule with Warmup</td></tr><tr><td>Epoch</td><td>5</td></tr><tr><td>Margin</td><td>5</td></tr><tr><td>Alpha</td><td>1</td></tr></table>",
"html": null,
"num": null,
"text": "is the list of the final hyper-parameter settings we used for training in our experiments."
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Divergent mBERT hyperparameters."
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "summarizes model performance for all the language pairs in the test set. Consistent with the development set results, the ensemble improves over all the baselines, and outperforms each of its components."
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>Pair</td><td>System</td><td/><td>Target</td><td colspan=\"2\">Word-level Score</td><td>Source</td><td/><td>Sentence-level Score</td></tr><tr><td/><td/><td>AUC</td><td>AP</td><td>Recall</td><td>AUC</td><td>AP</td><td>Recall</td><td>Pearson's</td></tr><tr><td/><td>Random (Baseline 1)</td><td>0.505</td><td>0.387</td><td>0.284</td><td>0.496</td><td>0.380</td><td>0.249</td><td>-0.048</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>0.583</td><td>0.456</td><td>0.352</td><td>0.513</td><td>0.394</td><td>0.262</td><td>0.415</td></tr><tr><td>Et-En</td><td>TransQuest-LIME (Baseline 3)</td><td>0.592</td><td>0.510</td><td>0.402</td><td>-1.00</td><td>-1.00</td><td>-1.00</td><td>0.722</td></tr><tr><td/><td>Divergent mBERT</td><td>0.686</td><td>0.494</td><td>0.472</td><td>0.608</td><td>0.403</td><td>0.357</td><td>0.572</td></tr><tr><td/><td>Ensemble</td><td>0.710</td><td>0.621</td><td>0.515</td><td>0.695</td><td>0.510</td><td>0.459</td><td>0.772</td></tr><tr><td/><td>Random (Baseline 1)</td><td>0.488</td><td>0.359</td><td>0.239</td><td>0.505</td><td>0.374</td><td>0.254</td><td>-0.021</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>0.638</td><td>0.464</td><td>0.339</td><td>0.541</td><td>0.384</td><td>0.265</td><td>0.638</td></tr><tr><td>Ro-En</td><td>TransQuest-LIME (Baseline 3)</td><td>0.619</td><td>0.552</td><td>0.439</td><td>-1.00</td><td>-1.00</td><td>-1.00</td><td>0.882</td></tr><tr><td/><td>Divergent mBERT</td><td>0.734</td><td>0.557</td><td>0.526</td><td>0.618</td><td>0.412</td><td>0.372</td><td>0.742</td></tr><tr><td/><td>Ensemble</td><td>0.728</td><td>0.664</td><td>0.570</td><td>0.708</td><td>0.535</td><td>0.486</td><td>0.890</td></tr></table>",
"html": null,
"num": null,
"text": "Ru-De For Ru-De language pair, on target wordlevel scores, our ensemble method achieves an im-"
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>Source Sentence 1</td><td>Dobridorul nu este o except , ie \u00een ceea ce prives , te depozitele de</td></tr><tr><td>Target Sentence 1</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Word-level and sentence-level scores on development data. The baseline scores are taken from the leaderboard. Best results for each language by any method are marked in bold."
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td>Pair</td><td>System</td><td>Training Data</td><td>AUC</td><td>Target AP</td><td>Recall</td><td>AUC</td><td>Source AP</td><td>Recall</td></tr><tr><td/><td>Random (Baseline 1)</td><td>-</td><td>0.497</td><td>0.358</td><td>0.274</td><td>0.487</td><td>0.339</td><td>0.194</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>Et-En</td><td>0.616</td><td>0.441</td><td>0.338</td><td>0.535</td><td>0.371</td><td>0.231</td></tr><tr><td>Et-En</td><td>TransQuest-LIME (Baseline 3)</td><td>Et-En</td><td>0.624</td><td>0.536</td><td>0.424</td><td>0.544</td><td>0.440</td><td>0.309</td></tr><tr><td/><td>Divergent mBERT</td><td>En-Fr</td><td>0.725</td><td>0.536</td><td>0.493</td><td>0.544</td><td>0.440</td><td>0.309</td></tr><tr><td/><td>Ensemble Method</td><td>Et-En, En-Fr</td><td>0.743</td><td>0.631</td><td>0.527</td><td>0.758</td><td>0.514</td><td>0.453</td></tr><tr><td/><td>Random (Baseline 1)</td><td>-</td><td>0.516</td><td>0.311</td><td>0.187</td><td>0.500</td><td>0.280</td><td>0.150</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>Ro-En</td><td>0.666</td><td>0.438</td><td>0.295</td><td>0.534</td><td>0.292</td><td>0.148</td></tr><tr><td>Ro-En</td><td>TransQuest-LIME (Baseline 3)</td><td>Ro-En</td><td>0.634</td><td>0.523</td><td>0.415</td><td>0.478</td><td>0.351</td><td>0.243</td></tr><tr><td/><td>Divergent mBERT</td><td>En-Fr</td><td>0.717</td><td>0.462</td><td>0.452</td><td>0.478</td><td>0.351</td><td>0.243</td></tr><tr><td/><td>Ensemble Method</td><td>Ro-En, En-Fr</td><td>0.734</td><td>0.597</td><td>0.486</td><td>0.724</td><td>0.410</td><td>0.373</td></tr><tr><td/><td>Random (Baseline 1)</td><td>-</td><td>0.496</td><td>0.294</td><td>0.174</td><td>0.500</td><td>0.300</td><td>0.174</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>WMT's all language pairs</td><td>0.545</td><td>0.334</td><td>0.220</td><td>0.474</td><td>0.287</td><td>0.159</td></tr><tr><td>De-Zh</td><td>TransQuest-LIME (Baseline 3)</td><td>WMT's all language pairs</td><td>0.460</td><td>0.271</td><td>0.145</td><td>0.486</td><td>0.317</td><td>0.196</td></tr><tr><td/><td>Divergent mBERT</td><td>En-Fr</td><td>0.556</td><td>0.303</td><td>0.238</td><td>0.478</td><td>0.351</td><td>0.243</td></tr><tr><td/><td>Ensemble Method</td><td>En-Zh, En-Fr</td><td>0.630</td><td>0.400</td><td>0.265</td><td>0.610</td><td>0.311</td><td>0.252</td></tr><tr><td/><td>Random (Baseline 1)</td><td>-</td><td>0.492</td><td>0.308</td><td>0.216</td><td>0.506</td><td>0.341</td><td>0.237</td></tr><tr><td/><td>XMover-SHAP (Baseline 2)</td><td>WMT's all language pairs</td><td>0.522</td><td>0.328</td><td>0.224</td><td>0.522</td><td>0.356</td><td>0.259</td></tr><tr><td>Ru-De</td><td>TransQuest-LIME (Baseline 3)</td><td>WMT's all language pairs</td><td>0.404</td><td>0.262</td><td>0.164</td><td>0.534</td><td>0.427</td><td>0.320</td></tr><tr><td/><td>Divergent mBERT</td><td>En-Fr</td><td>0.579</td><td>0.418</td><td>0.321</td><td>0.478</td><td>0.351</td><td>0.243</td></tr><tr><td/><td>Ensemble Method</td><td>En-De, En-Fr</td><td>0.650</td><td>0.458</td><td>0.354</td><td>0.658</td><td>0.413</td><td>0.373</td></tr></table>",
"html": null,
"num": null,
"text": "Examples of word-level labels for different models."
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td>provement of 0.18 in AUC score, 0.16 in AP score,</td><td>Top-K score over the average of all baselines.</td></tr><tr><td>and 0.153 in Recall at Top-K score over the average</td><td/></tr><tr><td>of all baselines. Similarly, for the source language,</td><td/></tr><tr><td>it achieves an improvement of 0.14 in the AUC</td><td/></tr><tr><td>score, 0.04 in AP score, and 0.101 in Recall at</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Word level results for all language pairs on the test set in terms of AUC, AP and Recall at Top-K. The baseline scores are taken from the leader-board. Best results for each language by any method are marked in bold."
},
"TABREF12": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Results of sentence-level submissions and their performance on the test set. The baseline scores are taken from the leaderboard. Best results for each language by any method are marked in bold."
}
}
}
}