{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:41:22.786212Z"
},
"title": "TransQuest at WMT2020: Sentence-Level Direct Assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Or\u0203san",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Translation Studies",
"institution": "University of Surrey",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the team TransQuest's participation in Sentence-Level Direct Assessment shared task in WMT 2020. We introduce a simple QE framework based on cross-lingual transformers, and we use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results surpassing the results obtained by OpenKiwi, the baseline used in the shared task. We further fine tune the QE framework by performing ensemble and data augmentation. Our approach is the winning solution in all of the language pairs according to the WMT 2020 official results.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the team TransQuest's participation in Sentence-Level Direct Assessment shared task in WMT 2020. We introduce a simple QE framework based on cross-lingual transformers, and we use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results surpassing the results obtained by OpenKiwi, the baseline used in the shared task. We further fine tune the QE framework by performing ensemble and data augmentation. Our approach is the winning solution in all of the language pairs according to the WMT 2020 official results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of quality estimation (QE) systems is to determine the quality of a translation without having access to a reference translation. This makes it very useful in translation workflows where it can be used to determine whether an automatically translated sentence is good enough to be used for a given purpose, or if it needs to be shown to a human translator for translation from scratch or postediting (Kepler et al., 2019) . Quality estimation can be done at different levels: document level, sentence level and word level (Ive et al., 2018) . This paper presents TransQuest, a sentence-level quality estimation framework which is the winning solution in all the language pairs in the WMT 2020 Sentence-Level Direct Assessment shared task .",
"cite_spans": [
{
"start": 409,
"end": 430,
"text": "(Kepler et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 531,
"end": 549,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past, high preforming quality estimation systems such as QuEst (Specia et al., 2013) and QuEst++ (Specia et al., 2015) were heavily dependent on linguistic processing and feature engineering.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Specia et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 104,
"end": 125,
"text": "(Specia et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These features were fed into traditional machine-learning algorithms like support vector regression and randomised decision trees (Specia et al., 2013) , which then determined the quality of a translation. Even though, these approaches provide good results, they are no longer the state of the art, being replaced in recent years by neural-based QE systems which usually rely on little or no linguistic processing. For example the best-performing system at the WMT 2017 shared task on QE was POSTECH, which is purely neural and does not rely on feature engineering at all (Kim et al., 2017) .",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Specia et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 572,
"end": 590,
"text": "(Kim et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to achieve high results, approaches such as POSTECH require extensive pre-training, which means they depend on large parallel data and are computationally intensive (Ive et al., 2018) . TransQuest, our QE framework removes this dependency on large parallel data by using crosslingual embeddings (Ranasinghe et al., 2020) that are already fine-tuned to reflect properties between languages (Ruder et al., 2019) . Ranasinghe et al. (2020) show that by using them, TransQuest eases the burden of having complex neural network architectures, which in turn entails a reduction of the computational resources. That paper also shows that TransQuest performs well in transfer learning settings where it can be trained on language pairs for which we have resources and applied successfully on less resourced language pairs.",
"cite_spans": [
{
"start": 174,
"end": 192,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 304,
"end": 329,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 398,
"end": 418,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 421,
"end": 445,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows. The dataset used in the competition is briefly discussed in Section 2. In Section 3 we present the TransQuest framework and the methodology employed to train it. This is followed by the evaluation results and their discussion in Section 4. The paper finishes with conclusions and ideas for future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset for the Sentence-Level Direct Assessment shared task is composed of data extracted from Wikipedia for six language pairs, consisting of high-resource languages English-German (En-De) and English-Chinese (En-Zh), medium-resource languages Romanian-English (Ro-En) and Estonian-English (Et-En), and lowresource languages Sinhala-English (Si-En) and Nepalese-English (Ne-En), as well as a a Russian-English (Ru-En) dataset which combines articles from Wikipedia and Reddit . Each language pair has 7,000 sentence pairs in the training set, 1,000 sentence pairs in the development set and another 1,000 sentence pairs in the testing set. Each translation was rated with a score between 0 and 100 according to the perceived translation quality by at least three translators . The DA scores were standardised using the z-score. The quality estimation systems have to predict the mean DA z-scores of the test sentence pairs .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
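The z-score standardisation mentioned above can be made concrete. The sketch below (Python, with made-up numbers) standardises raw DA scores per annotator and averages them per segment, which is the quantity systems must predict; it illustrates the formula, not the organisers' official preprocessing.

```python
import numpy as np

# Hypothetical raw DA scores (0-100): rows are segments, columns are annotators.
raw = np.array([[70.0, 85.0, 90.0],
                [40.0, 55.0, 35.0],
                [95.0, 90.0, 88.0]])

# Standardise each annotator's scores: z = (x - mean) / std, computed column-wise.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

# The gold label for each segment is the mean z-score across annotators.
mean_z = z.mean(axis=1)
print(mean_z)
```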
{
"text": "This section presents the methodology used to develop our quality estimation methods. Our methodology is based on TransQuest our recently introduced QE framework (Ranasinghe et al., 2020) . We first briefly describe the neural network architectures TransQuest proposed, followed by the training details. More details about the framework can be found in (Ranasinghe et al., 2020) .",
"cite_spans": [
{
"start": 162,
"end": 187,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 353,
"end": 378,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The TransQuest framework that is used to implement the two architectures described here relies on the XLM-R transformer model (Conneau et al., 2020) to derive the representations of the input sentences (Ranasinghe et al., 2020) . The XLM-R transformer model takes a sequence of no more than 512 tokens as input, and outputs the representation of the sequence. The first token of the sequence is always [CLS] which contains the special embedding to represent the whole sequence, followed by embeddings acquired for each word in the sequence. As shown below, proposed neural network architectures of TransQuest can utilise both the embedding for the [CLS] token and the embeddings generated for each word (Ranasinghe et al., 2020) . The output of the transformer (or transformers for SiameseTransQuest described below), is fed into a simple output layer which is used to estimate the quality of translation. The way the XLM-R transformer is used and the output layer are different in the two instantiations of the framework. We describe each of them below. The fact that TransQuest does not rely on a complex output layer makes training its architectures much less computationally intensive than alternative solutions. The TransQuest framework is opensource, which means researchers can easily propose alternative architectures to the ones TransQuest presents (Ranasinghe et al., 2020) .",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 202,
"end": 227,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 648,
"end": 653,
"text": "[CLS]",
"ref_id": null
},
{
"start": 703,
"end": 728,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1358,
"end": 1383,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Architectures",
"sec_num": "3.1"
},
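To make the input/output contract above concrete, the following sketch encodes a source-translation pair with a pre-trained XLM-R model and extracts both the sequence-level ([CLS]-position) embedding and the per-token embeddings. It is a minimal reconstruction for illustration, not the TransQuest implementation itself.

```python
import torch
from transformers import XLMRobertaModel, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("xlm-roberta-base")

src = "Das ist ein Beispielsatz."
tgt = "This is an example sentence."

# Encode the pair as a single sequence, truncated to the 512-token limit.
inputs = tokenizer.encode_plus(src, tgt, return_tensors="pt",
                               truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs[0]           # shape: (1, seq_len, hidden_size)
cls_embedding = token_embeddings[:, 0]  # first position plays the [CLS] role
```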
{
"text": "Both neural network architectures presented below use the pre-trained XLM-R models released by HuggingFace's model repository (Wolf et al., 2019) . There are two versions of the pre-trained XLM-R models named XLM-R-base and XLM-R-large. Both of these XLM-R models cover 104 languages (Conneau et al., 2020) , potentially making it very useful to estimate the translation quality for a large number of language pairs.",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 284,
"end": 306,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Architectures",
"sec_num": "3.1"
},
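For reference, the two checkpoints are published on the HuggingFace model hub under the identifiers below, and switching between them is a one-line change (a sketch; the layer and hidden-size figures are quoted from the XLM-R release):

```python
from transformers import XLMRobertaModel

# XLM-R-base: 12 layers, hidden size 768; XLM-R-large: 24 layers, hidden size 1024.
xlmr_base = XLMRobertaModel.from_pretrained("xlm-roberta-base")
xlmr_large = XLMRobertaModel.from_pretrained("xlm-roberta-large")
```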
{
"text": "TransQuest implements two different neural network architectures (Ranasinghe et al., 2020) to perform sentence-level translation quality estimation as described below. The architectures are presented in Figure 1 .",
"cite_spans": [
{
"start": 65,
"end": 90,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Neural Network Architectures",
"sec_num": "3.1"
},
{
"text": "The first architecture proposed uses a single XLM-R transformer model and is shown in Figure 1a . The input of this model is a concatenation of the original sentence and its translation, separated by the [SEP] token. TransQuest proposes three pooling strategies for the output of the transformer model: using the output of the [CLS] token (CLS-strategy); computing the mean of all output vectors of the input words (MEANstrategy); and computing a max-over-time of the output vectors of the input words (MAXstrategy) (Ranasinghe et al., 2020) . The output of the pooling strategy is used as the input of a softmax layer that predicts the quality score of the translation. TransQuest used mean-squared-error loss as the objective function (Ranasinghe et al., 2020) . Similar to Ranasinghe et al. (2020) , the early experiments we carried out demonstrated that the CLS-strategy leads to better results than the other two strategies for this architecture. Therefore, we used the embedding of the [CLS] token as the input of a softmax layer.",
"cite_spans": [
{
"start": 204,
"end": 209,
"text": "[SEP]",
"ref_id": null
},
{
"start": 516,
"end": 541,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 737,
"end": 762,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 776,
"end": 800,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 86,
"end": 95,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "MonoTransQuest (MTransQuest):",
"sec_num": "1."
},
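A minimal sketch of this architecture follows: one XLM-R encoder over the concatenated pair, CLS-strategy pooling, and a scalar output head trained with mean-squared-error loss. The class and variable names are ours, and the single linear output unit is an assumption; consult the TransQuest repository for the exact head.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel

class MonoTransQuestSketch(nn.Module):
    """Single-encoder QE model: XLM-R over the source-translation pair,
    CLS pooling, one scalar quality score (a reconstruction, not the original code)."""

    def __init__(self, model_name="xlm-roberta-large"):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0]
        cls = hidden[:, 0]                 # CLS-strategy pooling
        return self.head(cls).squeeze(-1)  # predicted DA z-score

loss_fn = nn.MSELoss()  # mean-squared-error objective, as in the paper
```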
{
"text": "2. SiameseTransQuest (STransQuest): The second approach proposed in TransQuest relies on the Siamese architecture depicted in Figure 1b which has shown promising results in monolingual semantic textual similarity tasks (Reimers and Gurevych, 2019; Ranasinghe et al., 2019) . For this, we fed the original text and the translation into two separate XLM-R transformer models. Similarly to the previous architecture, we experimented with the same three pooling strategies for the outputs of the transformer models (Ranasinghe et al., 2020) . TransQuest then calculates the cosine similarity between the two outputs of the pooling strategy. TransQuest used mean-squared-error loss as the objective function. Similar to Ranasinghe et al. (2020) in the initial experiments we carried out with this architecture the MEAN-strategy showed better results than the other two strategies. For this reason, we used the MEAN-strategy for our experiments. Therefore, cosine similarity is calculated between the mean of all output vectors of the input words produced by each transformer.",
"cite_spans": [
{
"start": 219,
"end": 247,
"text": "(Reimers and Gurevych, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 248,
"end": 272,
"text": "Ranasinghe et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 511,
"end": 536,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 715,
"end": 739,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 126,
"end": 135,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "MonoTransQuest (MTransQuest):",
"sec_num": "1."
},
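The Siamese variant can be sketched analogously: each sentence is encoded separately, the token embeddings are mean-pooled (MEAN-strategy), and the cosine similarity of the two pooled vectors is the predicted score. For brevity this sketch shares one encoder between the two branches, whereas the paper describes two separate XLM-R models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import XLMRobertaModel

class SiameseTransQuestSketch(nn.Module):
    """Dual-branch QE model: mean-pooled XLM-R embeddings, cosine similarity."""

    def __init__(self, model_name="xlm-roberta-large"):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained(model_name)

    def _embed(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0]
        mask = attention_mask.unsqueeze(-1).float()
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # MEAN-strategy pooling

    def forward(self, src_ids, src_mask, tgt_ids, tgt_mask):
        u = self._embed(src_ids, src_mask)
        v = self._embed(tgt_ids, tgt_mask)
        return F.cosine_similarity(u, v)  # trained with MSE against gold z-scores
```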
{
"text": "We used the same set of configurations suggested in Ranasinghe et al. (2020) for all the language pairs evaluated in this paper in order to ensure consistency between all the languages. This also provides a good starting configuration for researchers who intend to use TransQuest on a new language pair. In both architectures, we used a batch-size of eight, Adam optimiser with learning rate 2e\u22125, and a linear learning rate warm-up over 10% of the training data. The models were trained using only training data. Furthermore, they were evaluated while training using an evaluation set that had one fifth of the rows in training data. We performed early stopping if the evaluation loss did not improve over ten evaluation rounds. All of the models were trained for three epochs. For some of the experiments, we used an Nvidia Tesla K80 GPU, whilst for others we used an Nvidia Tesla T4 GPU. This was purely based on the availability of the hardware and it was not a methodological decision.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "3.2"
},
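The hyper-parameters listed above amount to the following configuration (a hypothetical dictionary for illustration; the framework exposes its own configuration format):

```python
train_config = {
    "batch_size": 8,
    "optimizer": "Adam",
    "learning_rate": 2e-5,
    "warmup_ratio": 0.1,            # linear warm-up over 10% of the training data
    "num_epochs": 3,
    "eval_fraction": 0.2,           # one fifth of the training rows used for evaluation
    "early_stopping_patience": 10,  # stop if eval loss stalls for 10 evaluation rounds
}
```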
{
"text": "The TransQuest framework was implemented using Python 3.7 and PyTorch 1.5.0. To integrate the functionalities of the transformers we used the version 3.0.0 of the HuggingFace's Transformers library. The implemented framework is available on GitHub 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.3"
},
{
"text": "This section presents the evaluation results of our architectures and the fine tuning strategies that can be used to improve the results. We first evaluate the TransQuest framework with the default setting (Section 4.1). Next we evaluate an ensemble setting of TransQuest in Section 4.2. We finally assess the performance of TransQuest with augmented data. We conclude the section with a discussion of the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation, Results and Discussion",
"sec_num": "4"
},
{
"text": "The evaluation metric used was the Pearson correlation (r) between the predictions and the gold standard from the test set, which is the most commonly used evaluation metric in WMT quality estimation shared tasks Fonseca et al., 2019) . We report the Pearson correlation values that we obtained from CodaLab, the hosting platform of the WMT 2020 QE shared task. As a baseline we compare our results with the performance of OpenKiwi as reported by the task organisers .",
"cite_spans": [
{
"start": 213,
"end": 234,
"text": "Fonseca et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation, Results and Discussion",
"sec_num": "4"
},
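Pearson's r between system predictions and the gold z-scores can be computed directly, for example with SciPy (the numbers below are made up):

```python
from scipy.stats import pearsonr

predictions = [0.12, -0.50, 0.88, -1.10, 0.34]  # hypothetical model outputs
gold        = [0.05, -0.61, 0.95, -0.98, 0.10]  # hypothetical mean DA z-scores

r, p_value = pearsonr(predictions, gold)
print(f"Pearson r = {r:.4f}")
```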
{
"text": "The first evaluation we carried out was for the default configurations of the TransQuest framework where we used the training set of each language to build a quality estimation model using XLM-Rlarge transformer model and we evaluated it on a test set from the same language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransQuest with Default settings",
"sec_num": "4.1"
},
{
"text": "The results for each language with default settings are shown in row I of Table 1. The results indicate that both architectures proposed in TransQuest outperform the baseline, OpenKiwi, in all the language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransQuest with Default settings",
"sec_num": "4.1"
},
{
"text": "From the two architectures, MTransQuest performs slightly better than STransQuest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransQuest with Default settings",
"sec_num": "4.1"
},
{
"text": "As shown in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Default settings",
"sec_num": "4.1"
},
{
"text": "Transformers have been proven to provide better results when experimented with ensemble techniques (Xu et al., 2020) . In order to improve the results of TransQuest we too followed an ensemble approach which consisted of two steps. We conducted these steps for both architectures in TransQuest.",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TransQuest with Ensemble",
"sec_num": "4.2"
},
{
"text": "1. We train TransQuest using the pre-trained XLM-R-base transformer model instead of the XLM-R-large transformer model in the TransQuest default setting. We report the results from the two architectures from this step in row II of Table 1 as MTransQuest-base and STransQuest-base.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Ensemble",
"sec_num": "4.2"
},
{
"text": "2. We perform a weighted average ensemble for the output of the default setting and the output we obtained from step 1. We experimented on weights 0.8:0.2, 0.6:0.4, 0.5:0.5 on the output of the default setting and output from the step 1 respectively. Since the results we got from XLM-R-base transformer model are slightly worse than the results we got from default setting we did not consider the weight combinations that gives higher weight to XLM-R-base transformer model results. We obtained best results when we used the weights 0.8:0.2. We report the results from the two architectures from this step in row III of Table 1 as MTransQuest \u2297 and STransQuest \u2297.",
"cite_spans": [],
"ref_spans": [
{
"start": 621,
"end": 628,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Ensemble",
"sec_num": "4.2"
},
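Step 2 reduces to a per-instance weighted average of the two models' predictions; the sketch below shows the 0.8:0.2 combination that performed best (the array contents are made up):

```python
import numpy as np

preds_large = np.array([0.41, -0.73, 1.02])  # default setting (XLM-R-large) outputs
preds_base  = np.array([0.35, -0.60, 0.88])  # XLM-R-base outputs

ensemble = 0.8 * preds_large + 0.2 * preds_base
```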
{
"text": "As shown in Table 1 both architectures in TransQuest with ensemble setting gained \u2248 0.01-0.02 Pearson correlation boost over the default settings for all the language pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Ensemble",
"sec_num": "4.2"
},
{
"text": "All of the languages had 7,000 training instances that we used in the above mentioned settings in TransQuest. To experiment how TransQuest performs with more data, we trained TransQuest on a data augmented setting. Alongside the training, development and testing datasets, the shared task organisers also provided the parallel sentences which were used to train the neural machine translation system in each language. In the data augmentation setting, we added the sentence pairs from that neural machine translation system training file to training dataset we used to train TransQuest. In order to find the best setting for the data augmentation we experimented with adding 1000, 2000, 3000, up to 5000 sentence pairs randomly. Since the ensemble setting performed better than the default setting of TransQuest, we conducted this data augmentation experiment on the ensemble setting. We assumed that the sentence pairs added from the neural machine translation system training file have maximum translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransQuest with Data Augmentation",
"sec_num": "4.3"
},
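The augmentation step amounts to appending randomly sampled NMT-training sentence pairs to the QE training set, each labelled with the maximum quality score. A sketch assuming pandas DataFrames with hypothetical column names; the exact label value used for "maximum translation quality" is not stated in the paper.

```python
import pandas as pd

def augment(train_df, parallel_df, n_pairs=2000, max_score=1.0, seed=777):
    """Append n_pairs random parallel sentences, treated as perfect translations."""
    sample = parallel_df.sample(n=n_pairs, random_state=seed).copy()
    sample["z_mean"] = max_score  # assumed label for 'maximum translation quality'
    return pd.concat([train_df, sample], ignore_index=True)
```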
{
"text": "Up to 2000 sentence pairs the results continued to get better. However, adding more than 2000 sentence pairs did not improve the results. We did not experiment with adding any further than 5000 sentence pairs to the training set since the timeline of the competition was tight. We were also aware that adding more sentence pairs with the maximum translation quality to the training file will make it imbalance and affect the performance of the machine learning models negatively. We report the results from the two architectures from this step in row IV of Table 1 as MTransQuest \u2297-Aug and STransQuest \u2297-Aug.",
"cite_spans": [],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Data Augmentation",
"sec_num": "4.3"
},
{
"text": "This setting provided the best results for both architectures in TransQuest for all of the language pairs. As shown in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TransQuest with Data Augmentation",
"sec_num": "4.3"
},
{
"text": "In an attempt to better understand the performance and limitations of TransQuest we carried out an error analysis on the results obtained on Romanian -English and Sinhala -English. The choice of language pairs we analysed was determined by the availability of native speakers to perform this analysis. We focused on the cases where the difference between the predicted score and expected score was the greatest. This included both cases where the predicted score was underestimated and overestimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "Analysis of the results does not reveal very clear patterns. The largest number of errors seem to be caused by the presence of named entities in the source sentences. In some cases these entities are mishandled during the translation. The resulting sentences are usually syntactically correct, but semantically odd. Typical examples are RO:\u00cen urm\u0203 explor\u0203rilor C\u0203pitanului James Cook, Australia s , i Noua Zeeland\u0203 au devenit t , inte ale colonialismului britanic. (As a result of Captain James Cook's explorations, Australia and New Zealand have become the targets of British colonialism.) -EN: Captain James Cook, Australia and New Zealand have finally become the targets of British colonialism. (expected: -1.2360, predicted: 0.2560) and RO: O alt\u0203 problem\u0203 important\u0203 cu care trupele Antantei au fost obligate s\u0203 se confrunte a fost malaria. (Another important problem that the Triple Entente troops had to face was malaria.) -EN: Another important problem that Antarctic troops had to face was malaria. (expected: 0.2813, predicted: -0.9050). In the opinion of the authors of this paper, it is debatable whether the expected scores for these two pairs should be so different. Both of them have obvious problems and cannot be clearly understood without reading the source. For this reason, we would expect that both of them have low scores. Instances like this also occur in the training data. As a result of this, it may be that TransQuest learns contradictory information, which in turn leads to errors at the testing stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "A large number of problems are caused by incomplete source sentences or input sentences with noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "For example the pair RO: thumbright250pxDrapelul cu f\u00e2s , iile\u00een pozit , ie vertical\u0203 (The flag with strips in upright position) -EN: ghtghtness 250pxDrapel with strips in upright position has an expected score of 0.0595, but our method predicts -0.9786. Given that only ghtghtness 250pxDrapel is wrong in the translation, the predicted score is far too low. In an attempt to see how much this noise influences the result, we run the system with the pair RO: Drapelul cu f\u00e2s , iil\u00ea \u0131n pozit , ie vertical\u0203 -EN: Drapel with strips in upright position. The prediction is 0.42132, which is more in line with our expectations given that one of the words is not translated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "Similar to Ro-En, in Si-En the majority of problems seem to be caused by the presence of named entities in the source sentences. For an example in the English translation: But the disguised Shiv will help them securely establish the statue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "(expected: 1.3618, predicted: -0.008), the correct English translation would be But the disguised Shividru will help them securely establish the statue.. Only the named entity Shividru is translated incorrectly, therefore the annotators have annotated the translation with a high quality. However TransQuest fails to identify that. Similar scenarios can be found in English translations Kamala Devi Chattopadhyay spoke at this meeting, Dr. Ann. (expected:1.3177, predicted:-0.2999) and The Warrior Falls are stone's, halting, heraldry and stonework rather than cottages. The cathedral manor is navigable places (expected:0.1677, predicted:-0.7587). It is clear that the presence of the named entities seem to confuse the algorithm we used, hence it needs to handle named entities in a proper way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.4"
},
{
"text": "In this paper we evaluated different settings of TransQuest in sentence-level direct quality assessment. We showed that ensemble results with XLM-R-base and XLM-R-large with data augmentation techniques can improve the performance of TransQuest framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The official results of the competition show that TransQuest won the first place in all the language pairs in Sentence-Level Direct Assessment task. TransQuest is the sole winner in En-Zh, Ne-En and Ru-En language pairs and the multilingual track. For the other language pairs (En-De, Ro-En, Et-En and Si-En) it shares the first place with another system, whose results are not statistically different from ours. The full results of the shared task can be seen in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In the future, we plan to experiment more with the data augmentation settings. We are interested in augmenting the training file with semantically similar sentences to the test set rather than augmenting with random sentence pairs as we did in this paper. As shown in the error analysis in Section 4.4 the future releases of the framework need to handle named entities properly. We also hope to implement TransQuest in document level quality estimation too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "TransQuest GitHub repositoryhttps://github. com/tharindudr/transQuest",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised quality estimation for neural machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chaudhary",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Fomicheva, L Specia, S Sun, L Yankovskaya, F Blain, F Guzm\u00e1n, M Fishel, N Aletras, and V Chaudhary. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, Vol 8 (2020).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the WMT 2019 shared tasks on quality estimation",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5401"
]
},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "deepQuest: A framework for neural-based quality estimation",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Ive",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3146--3157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Ive, Fr\u00e9d\u00e9ric Blain, and Lucia Specia. 2018. deepQuest: A framework for neural-based quality estimation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3146-3157, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OpenKiwi: An open source framework for quality estimation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3020"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 F. T. Martins. 2019. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117-122, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4763"
]
},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, pages 562-568, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic textual similarity with",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_116"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2019. Semantic textual similarity with",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Siamese neural networks",
"authors": [],
"year": null,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1004--1011",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_116"
]
},
"num": null,
"urls": [],
"raw_text": "Siamese neural networks. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 1004-1011, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transquest: Translation quality estimation with cross-lingual transformers",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. Transquest: Translation quality estimation with cross-lingual transformers. In Proceedings of the 28th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "J. Artif. Int. Res",
"volume": "65",
"issue": "1",
"pages": "569--630",
"other_ids": {
"DOI": [
"10.1613/jair.1.11640"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. J. Artif. Int. Res., 65(1):569-630.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Findings of the wmt 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 FT Martins. 2020. Findings of the wmt 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation: Shared Task Papers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Findings of the WMT 2018 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "689--709",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6451"
]
},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n Astudillo, and Andr\u00e9 F. T. Martins. 2018. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-level translation quality prediction with QuEst++",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/P15-4020"
]
},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with QuEst++. In Proceedings of ACL-IJCNLP 2015",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Association for Computational Linguistics and The Asian Federation of Natural Language Processing",
"authors": [],
"year": null,
"venue": "System Demonstrations",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "System Demonstrations, pages 115-120, Beijing, China. Association for Computational Linguistics and The Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "QuEst -a translation quality estimation framework",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "G",
"middle": [
"C"
],
"last": "Jose",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "De Souza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Kashif Shah, Jose G.C. de Souza, and Trevor Cohn. 2013. QuEst -a translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79-84, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving bert fine-tuning via self-ensemble and self-distillation",
"authors": [
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Ligao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.10345"
]
},
"num": null,
"urls": [],
"raw_text": "Yige Xu, Xipeng Qiu, Ligao Zhou, and Xuanjing Huang. 2020. Improving bert fine-tuning via self-ensemble and self-distillation. arXiv preprint arXiv:2002.10345.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "MTransQuest gained \u2248 0.2-0.3 Pearson correlation boost over OpenKiwi in all the language pairs. Additionally, MTransQuest achieves \u2248 0.4 Pearson correlation boost over OpenKiwi in the low-resource language pair Ne-En. (a) MTransQuest architecture (b) STransQuest Architecture Two architectures of the TransQuest framework.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "both architectures in TransQuest with the data augmentation setting gained \u2248 0.01-0.09 Pearson correlation boost over the default settings for all the language pairs. Additionally, MTransQuest \u2297-Aug achieves \u2248 0.09 Pearson correlation boost over default MTransQuest in the high-resource language pair En-De.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "Pearson (r) correlation between TransQuest algorithm predictions and human DA judgments. Best results for each language (any method) are marked in bold. Rows I, II, III and IV indicate the different settings of TransQuest, explained in Sections 4.1-4.3. OpenKiwi baseline results are in Row V.",
"html": null,
"num": null
}
}
}
}