|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T02:13:09.711503Z" |
|
}, |
|
"title": "An Exploratory Study on Multilingual Quality Estimation", |
|
"authors": [ |
|
{ |
|
"first": "Shuo", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Fomicheva", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sheffield", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [], |
|
"last": "Blain", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sheffield", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "El-Kishky", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Adithya", |
|
"middle": [], |
|
"last": "Renduchintala", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sheffield", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages. An obvious disadvantage of this approach is the need for labelled data for each given language pair. We challenge this assumption by exploring different approaches to multilingual Quality Estimation (QE), including using scores from translation models. We show that these outperform singlelanguage models, particularly in less balanced quality label distributions and low-resource settings. In the extreme case of zero-shot QE, we show that it is possible to accurately predict quality for any given new language from models trained on other languages. Our findings indicate that state-of-the-art neural QE models based on powerful pre-trained representations generalise well across languages, making them more applicable in real-world settings.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages. An obvious disadvantage of this approach is the need for labelled data for each given language pair. We challenge this assumption by exploring different approaches to multilingual Quality Estimation (QE), including using scores from translation models. We show that these outperform singlelanguage models, particularly in less balanced quality label distributions and low-resource settings. In the extreme case of zero-shot QE, we show that it is possible to accurately predict quality for any given new language from models trained on other languages. Our findings indicate that state-of-the-art neural QE models based on powerful pre-trained representations generalise well across languages, making them more applicable in real-world settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Quality Estimation (QE) (Blatz et al., 2004a; Specia et al., 2009) is the task of predicting the quality of an automatically generated translation at test time, when no reference translation is available for comparison. Instead of reference translations, QE turns to explicit quality indicators that are either provided by the Machine Translation (MT) system itself (the so-called glass-box features) or extracted from both the source and the target texts (the socalled black-box features) (Specia et al., 2018b ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 45, |
|
"text": "(Blatz et al., 2004a;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 46, |
|
"end": 66, |
|
"text": "Specia et al., 2009)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 511, |
|
"text": "(Specia et al., 2018b", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the current QE approaches, black-box features are learned representations extracted by finetuning pre-trained multilingual or cross-lingual sentence encoders such as BERT (Devlin et al., 2018) , XLM-R (Conneau et al., 2019) or LASER (Artetxe and Schwenk, 2019) . These supervised approaches have led to the state-of-the-art (SOTA) results in this task (Kepler et al., 2019; Fonseca et al., 2019) , similarly to what has been observed for a myriad of other downstream natural language processing applications that rely on cross-lingual sentence similarity. Glass-box features are usually obtained by extracting various types of information from the MT system, e.g. lexical probability or language model probability in the case of statistical MT systems (Blatz et al., 2004b) , or more recently softmax probability and attention weights from neural MT models (Fomicheva et al., 2020) . Glass-box approach is potentially useful for low resource or zeroshot scenarios as it does not require large amounts of labelled data for training, but it does not perform as well as SOTA supervised models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 195, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 226, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 263, |
|
"text": "(Artetxe and Schwenk, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 376, |
|
"text": "(Kepler et al., 2019;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 398, |
|
"text": "Fonseca et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 776, |
|
"text": "(Blatz et al., 2004b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 884, |
|
"text": "(Fomicheva et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "QE is therefore generally framed as a supervised machine learning problem, with models trained on data labelled for quality for each language pair. Training data publicly available to build QE models is constrained to very few languages, which has made it difficult to assess how well QE models generalise across languages. Therefore QE work to date has been addressed as a language-specific task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The recent availability of multilingual QE data in a diverse set of language pairs (see Section 4.1) has made it possible to explore the multilingual potential of the QE task and SOTA models. In this paper, we posit that it is possible and beneficial to extend SOTA models to frame QE as a languageindependent task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We further explore the role of in-language supervision in comparison to supervision coming from other languages in a multi-task setting. Finally, we propose for the first time to model QE as a zero-shot cross-lingual transfer task, enabling new avenues of research in which multilingual models can be trained once and then serve a multitude of languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contributions of this paper are: (i) we propose new multi-task learning approaches for multilingual QE (Section 3); (ii) we show that multilingual system outperforms single language ones (Section 5.1.1), especially in low-resource and less balanced label distribution settings (Section 5.1.3), and -counter-intuitively -that sharing a source or target language with the test case does not prove beneficial (Section 5.1.2); and (iii) we study black-box and glass-box QE in a multilingual setting and show that zero-shot QE is possible for both (Section 5.1.3 and 5.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "QE Early QE models were trained upon a set of explicit features expressing either the confidence of the MT system, the complexity of the source sentence, the fluency of the translation in the target language or its adequacy with regard to the source sentence (Specia et al., 2018b) . Current SOTA models are learnt with the use of neural networks (NN) (Specia et al., 2018a; Fonseca et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 281, |
|
"text": "(Specia et al., 2018b)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 374, |
|
"text": "(Specia et al., 2018a;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 396, |
|
"text": "Fonseca et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The assumption is that representations learned can, to some extent, account for source complexity, target fluency and source-target adequacy. These are fine-tuned from pre-trained word representations extracted using multilingual or cross-lingual sentence encoders such as BERT (Devlin et al., 2018) , XLM-R (Conneau et al., 2019) or LASER (Artetxe and Schwenk, 2019) . Kim et al. (2017) propose the first breakthrough in neural-based QE with the Predictor-Estimator modular architecture. The Predictor model is an encoder-decoder Recurrent Neural Network (RNN) model trained on a huge amount of parallel data for a word prediction task. Its output is fed to the Estimator, a unidirectional RNN trained on QE data, to produce the quality estimates. Kepler et al. (2019) use a similar architecture where the Predictor model is replaced by pretrained contextualised word representations such as BERT (Devlin et al., 2018) or XLM-R (Conneau et al., 2019) . Despite achieving strong performances, such models are resource heavy and need to be fine-tuned for each language-pair under consideration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 299, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 330, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 367, |
|
"text": "(Artetxe and Schwenk, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 387, |
|
"text": "Kim et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 769, |
|
"text": "Kepler et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 898, |
|
"end": 919, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 951, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a very different approach, Fomicheva et al. (2020) propose exploiting information provided by the NMT system itself. By exploring uncertainty quantification methods, they show that the confidence with which the NMT system produces its translation correlates well with its quality. Although not performing as well as SOTA supervised models, their approach has the main advantage to be unsupervised and not rely on labelled data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 53, |
|
"text": "Fomicheva et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Multilinguality Multilinguality allows training a single model to perform a task from and to multiple languages. This principle has been successfully applied to NMT (Dong et al., 2015; Firat et al., 2016b,a; Nguyen and Chiang, 2017) . Aharoni et al. (2019) stretches this approach by translating up to 102 languages from and to English using a Transformer model (Vaswani et al., 2017) . They show that multilingual many-to-many models are effective in low resource settings. Multilinguality also allows for zero-shot translation (Johnson et al., 2017) . With a simple encoder-decoder architecture and without explicit bridging between source and target languages, they show that their model is able to build a form of inter-lingual representation between all involved language pairs. Shah and Specia (2016) is the only work in QE that attempted to explore models for more than one language. They use multitask learning with annotators or languages as multiple tasks. In a traditional black-box feature-based approach with Gaussian Processes as learning algorithm, their results suggest that adequately modelling the additional data is as important as the additional data itself. The multilingual models led to marginal improvements over bilingual ones. In addition, the experiments were only conducted with English translation into two closely related languages (French and Spanish).", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 184, |
|
"text": "(Dong et al., 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 207, |
|
"text": "Firat et al., 2016b,a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 232, |
|
"text": "Nguyen and Chiang, 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 256, |
|
"text": "Aharoni et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 384, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 551, |
|
"text": "(Johnson et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 806, |
|
"text": "Shah and Specia (2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe the QE models we propose and experiment with. They build upon pretrained representations and represent the SOTA in QE, as we will show in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Pre-trained contextualised representations such as BERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019) are deep contextualised language models based on the transformer neural architecture (Vaswani et al., 2017) . These models are pre-trained on a large amount of texts in multiple languages and optimised with self-supervised loss functions. They use shared subword vocabularies that directly support more than a hundred languages without the need for any language-specific pre-processing. We explore QE models built on top of XLM-R, a pre-trained contextualised language model that achieves SOTA performance on multiple benchmark datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 77, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 218, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Baseline QE model (BASE) Given a source sentence s X in language X and a target sentence s Y in language Y , we model the QE function f by stacking a 2-layer multilayer perceptron (MLP) on the vector representation of the [CLS] token from XLM-R:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "f (s X , s Y ) =W 2 \u2022 ReLU ( W 1 \u2022 E cls (s X , s Y ) + b 1 ) + b 2 (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "W 2 \u2208 R 1\u00d74096 , b 2 \u2208 R, W 1 \u2208 R 4096\u00d71024 and b 1 \u2208 R 4096", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": ". E cls is a function that extracts the vector representation of the [CLS] token after encoding the concatenation of s X and s Y with XLM-R and ReLU is the Rectified Linear Unit activation function. We explore two training strategies: The bilingual (BL) strategy trains a QE model for every language pair while the multilingual (ML) strategy trains a single multilingual QE model for all language pairs, where the training data is simply pooled together without any language identifier. We note that this multilingual model here corresponds to a pooled, single-task learning approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
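The BASE scoring head in Eq. (1) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the dimensions are toy-sized (the paper uses a 1024-dimensional [CLS] vector and a 4096-unit hidden layer), the weights are random, the function names are ours, and `e_cls` merely stands in for XLM-R's encoding of the concatenated source-target pair.

```python
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v, b):
    # W: m x n matrix, v: n-vector, b: m-vector -> m-vector
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi for row, bi in zip(W, b)]

def qe_head(e_cls, W1, b1, W2, b2):
    # f(s_X, s_Y) = W_2 . ReLU(W_1 . E_cls(s_X, s_Y) + b_1) + b_2   (Eq. 1)
    h = relu(matvec(W1, e_cls, b1))
    return matvec(W2, h, b2)[0]  # scalar quality score

# Toy dimensions; the paper's W1 is 4096x1024 and W2 is 1x4096.
d_in, d_hid = 8, 16
W1 = [[random.uniform(-0.1, 0.1) for _ in range(d_in)] for _ in range(d_hid)]
b1 = [0.0] * d_hid
W2 = [[random.uniform(-0.1, 0.1) for _ in range(d_hid)]]
b2 = [0.0]

# e_cls stands in for the [CLS] vector of the concatenated (s_X, s_Y) pair.
e_cls = [random.uniform(-1, 1) for _ in range(d_in)]
score = qe_head(e_cls, W1, b1, W2, b2)
print(score)
```

In the real model, the gradient flows through `E_cls` as well, i.e. the XLM-R encoder is fine-tuned jointly with the MLP head on QE data.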
|
{ |
|
"text": "Multi-task Learning QE Model (MTL) Multitask learning has shown promising results in different NLP tasks (Ruder, 2017) . Here, we want to explore whether having parameter sharing across languages is beneficial, and to what extent having language-specific predictors can boost performance. Therefore, we experiment with a simple multi-task approach where we concurrently optimise multiple QE BASE models that use a language-specific (LS) training strategy. To allow for testing in zero-shot conditions, we also train a language-agnostic (LA) component, which receives sampled data from every language. We refer to these two models as MTL-LA and MTL-LS. As seen in Figure 2 , the MTL-LS submodels and MTL-LA submodel share a common XLM-R encoder, while each submodel has its own dedicated language-specific MLP. The intuition of this approach is that it can result in improved learning efficiency and prediction accuracy by exploiting the similarities and differences in the QE tasks for different language directions (Thrun, 1996; Baxter, 2000) . At training time, we iterate through the MTL-LS submodels in a round-robin fashion and alternate between training the MTL-LA submodel and training the chosen MTL-LS submodel. At test time, we can evaluate a test set with either the MTL-LA submodel or the MTL-LS submodel trained on the same language pair as the test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 118, |
|
"text": "(Ruder, 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1029, |
|
"text": "(Thrun, 1996;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1030, |
|
"end": 1043, |
|
"text": "Baxter, 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 663, |
|
"end": 671, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
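The alternating round-robin schedule described above can be sketched as follows. `mtl_schedule` is a hypothetical helper name, not from the paper; a real training loop would perform a gradient step on the named submodel at each slot rather than just record it.

```python
from itertools import cycle

def mtl_schedule(language_pairs, steps):
    """Round-robin over the language-specific (LS) submodels, alternating each
    LS update with an update of the language-agnostic (LA) submodel, which
    receives sampled data from every language."""
    order = []
    rr = cycle(language_pairs)
    for _ in range(steps):
        order.append("LA")      # train the language-agnostic submodel
        order.append(next(rr))  # then the next language-specific submodel
    return order

sched = mtl_schedule(["En-De", "En-Zh", "Ro-En"], steps=4)
print(sched)  # ['LA', 'En-De', 'LA', 'En-Zh', 'LA', 'Ro-En', 'LA', 'En-De']
```

All slots share the same XLM-R encoder; only the MLP heads are submodel-specific.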
|
{ |
|
"text": "4 Experimental Setup", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual QE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use the official data from the WMT 2020 QE Shared Task 1 1 . This dataset contains sentences extracted from Wikipedia (Fomicheva et al., 2020) and Reddit for Ru-En, translated to and from English for a total of 7 language pairs. The language pairs are divided into 3 categories: the high-resource English-German (En-De), English-Chinese (En-Zh) and Russian-English (Ru-En) pairs; the medium-resource Romanian-English (Ro-En) and Estonian-English (Et-En) pairs; and the low-resource Sinhala-English (Si-En) and Nepali-English (Ne-En) pairs. Each translation was produced with SOTA transformer-based NMT models and manually annotated for quality using an annotation scheme inspired by the Direct Assessment (DA) methodology proposed by Graham et al. (2013) . Specifically, translations were annotated on a 0-100 scale, where the 0-10 range represents an incorrect translation; 11-29, a translation with few correct keywords, but the overall meaning is different from the source; 30-50, a translation with major mistakes; 51-69, a translation which is understandable and conveys the overall meaning of the source but contains typos or grammatical errors; 70-90, a translation that closely preserves the semantics of the source sentence; and 90-100, a perfect translation. Figure 3 shows the distribution of DA scores for the different language pairs. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 145, |
|
"text": "(Fomicheva et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 757, |
|
"text": "Graham et al. (2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1272, |
|
"end": 1280, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "QE Dataset", |
|
"sec_num": "4.1" |
|
}, |
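The annotation bands can be captured in a small helper. This is a sketch: `da_band` and the short labels are ours, and since the ranges given in the text overlap at 90, the sketch resolves the boundary by treating 90-100 as the "perfect" band.

```python
def da_band(score):
    """Map a 0-100 Direct Assessment score to the annotation bands
    described in the text (band edges as given there)."""
    if not 0 <= score <= 100:
        raise ValueError("DA scores lie on a 0-100 scale")
    if score <= 10:
        return "incorrect translation"
    if score <= 29:
        return "few correct keywords, different overall meaning"
    if score <= 50:
        return "major mistakes"
    if score <= 69:
        return "understandable, conveys overall meaning, typos/grammar errors"
    if score < 90:
        return "closely preserves source semantics"
    return "perfect translation"

print(da_band(75))  # closely preserves source semantics
```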
|
{ |
|
"text": "We train and test our models in the following conditions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Data splits we use the training and development sets provided for the WMT2020 shared task on QE. 2 Since the test set is not publicly available, we further split the 7,000-instance training set for each language pair by using the first 6,000 instances for training and the last 1,000 instances for development, and report results on the official (1,000) development set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.2" |
|
}, |
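The split described above amounts to simple slicing over each language pair's 7,000 training instances; a sketch with placeholder rows (the function name is ours):

```python
def split_wmt20_qe(train_rows):
    """Split a 7,000-instance WMT 2020 QE training set as described:
    first 6,000 rows for training, last 1,000 for development."""
    assert len(train_rows) == 7000, "expected the full 7,000-instance set"
    return train_rows[:6000], train_rows[6000:]

rows = [f"sent-{i}" for i in range(7000)]  # placeholder rows
train, dev = split_wmt20_qe(rows)
print(len(train), len(dev))  # 6000 1000
```

The official 1,000-instance development set is then held out entirely for reporting.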
|
{ |
|
"text": "Training details We optimise our models with Adam (Kingma and Ba, 2015) and use the same learning rate (1e \u22126 ) for all experiments. We use a batch size of 8 and train on Nvidia V100 GPUs for 20 epochs. Each model is trained 5 times with different random seeds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Evaluation All results in this paper are in terms of the average Pearson's correlation for predicted QE scores against gold QE scores over the 5 different runs. Pearson correlation is the standard metric for this task, but we also compute error using Root Mean Squared Error (RMSE) (see Appendix).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.2" |
|
}, |
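Both metrics are straightforward to compute from scratch; a minimal sketch (in practice one would typically use `scipy.stats.pearsonr`):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between predicted and gold QE scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root Mean Squared Error between predictions and gold scores."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

gold = [70.0, 45.0, 90.0, 10.0]  # illustrative DA scores, not real data
pred = [65.0, 50.0, 85.0, 20.0]
print(round(pearson(gold, pred), 3), round(rmse(gold, pred), 3))
```

The paper reports Pearson correlation averaged over the 5 random-seed runs, with RMSE in the appendix.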
|
{ |
|
"text": "In what follows, we pose and discuss various hypotheses on multilinguality for QE. First we focus on our black-box approach from Section 3 (Section 5.1). Second, we examine the behavior of a glassbox approach which does not directly model the source and target texts in multilingual settings (Section 5.2). In all cases, we define TrainL as the set of language pairs used for training the QE model, and TestL as the set of language pairs used at test time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "5.1 Black-box QE Approach", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As we can see from the results in Table 1 : Results for BASE and MTL QE models. We train different BASE-BL models for every language pair and a single BASE-ML model on all language pairs. We also train a single MTL QE model consists of multiple MTL-LS and MTL-LA submodels. For each TestL, we evaluate it with the MTL-LS submodel trained on the same language pair. We bold the best results across all models. Significant improvements over BASE BL are marked with *. \u2021 identifies systems trained on the full 7,000-instance training set with performances reported on the official test set of the WMT'20 QE Shared Task 1 (https://competitions.codalab.org/competitions/24447), which we assume to come from the same distribution as the dev set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multilingual models are better than bilingual models", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "a BASE-BL model trained on En-Zh and tested on En-De performs at average Pearson's correlation of 0.37, which is only 0.02 below the best result. We hypothesize that XLM-R might be capturing certain traits in TrainL that can generalise well to other TestL, i.e. the complexity of source sentences or the fluency of the target sentences (Sun et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 354, |
|
"text": "(Sun et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual models are better than bilingual models", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "Here we investigate whether having specialised language-specific sub-models which can benefit from the shared supervision from other languages while keeping their focus on a language-specific task can help to improve performance. Furthermore, it is possible that multi-task learning works better when language pairs share certain characteristics. Therefore, we also investigate whether combining language pairs that share either source or target languages can be more beneficial. For that, we use the MTL models but with a reduced set of languages. From the results in Table 1 , we observe that language-specialised predictors do not help improve performance. There is no clear advantage in using the multi-task learning QE approach (MTL-LS and MTL-LA) where each language pair is treated as a separate task; over the simple singletask multi-lingual learning approach (BASE-ML), despite the former having more parameters and language-specific MLP layers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 576, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "There is little benefit from specialisation", |
|
"sec_num": "5.1.2" |
|
}, |
|
{ |
|
"text": "In the table, we compare MTL models trained on language pairs that share the source language (En-*) or the target language (*-En) against MTL models trained on all languages (All). As we can see from the results, the MTL model trained on En-* perform worse than the MTL model trained on all language pairs. In contrast, the MTL model trained on *-En performs a little bit better than the MTL model trained on all language pairs on 4 out of the 5 language pairs and is comparable to Base-ML on those language directions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "There is little benefit from specialisation", |
|
"sec_num": "5.1.2" |
|
}, |
|
{ |
|
"text": "To test whether a multilingual model for QE can generalise beyond the language pairs observed during training, we also conduct experiments varying amounts of in-language data (i.e. 0% -zeroshot, 5%, 10%, 25%, 50%, 75% and 100%). We build and compare BASE-BL and BASE-ML models. We train BASE-BL models only on the sub- Table 2 , the multilingual model performs better than the bilingual models on all language pairs for every configuration of training data. Moreover, in 3 out of 7 cases, the zero-shot models perform better than the fully-trained bilingual models. This provides strong evidence that the QE task can be solved in a multilingual way, without loss of performance compared to bilingual performance. It also shows strong evidence for the zero-shot applicability of our models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 326, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multilingual models help zero-and few-shot QE", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "Having pre-trained representations can help build state-of-the-art multilingual systems. However, these representations are costly to compute in practice, which limits their applicability for building QE systems for real-time scenarios. Glass-box approaches to QE extract information from the NMT system itself to predict quality, without directly relying on the source and target text or using any external resources. To test how well this information can generalise across different languages, we lever- age existing work on glass-box QE by Fomicheva et al. (2020) that explores NMT output distribution to capture predictive uncertainty as a proxy for MT quality. We use the following 5 best-performing glass-box indicators from their work:", |
|
"cite_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 566, |
|
"text": "Fomicheva et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 Average NMT log-probability of the translated sentence;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 Variance of word-level log-probabilities;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 Entropy of NMT softmax output distribution;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 NMT log-probability of translations generated with Monte Carlo dropout (Gal and Ghahramani, 2016 \u2022 Lexical similarity between MT hypotheses generated with Monte Carlo dropout.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 98, |
|
"text": "(Gal and Ghahramani, 2016", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
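Three of the listed indicators reduce to simple statistics over the decoder's output. A pure-Python sketch with toy values (the function names are ours; the Monte Carlo dropout indicators additionally require multiple stochastic forward passes through the NMT model, which are omitted here):

```python
import math

def sentence_logprob_stats(token_logprobs):
    """Average and variance of word-level log-probabilities for one
    translation: the first two glass-box indicators listed above."""
    n = len(token_logprobs)
    avg = sum(token_logprobs) / n
    var = sum((lp - avg) ** 2 for lp in token_logprobs) / n
    return avg, var

def softmax_entropy(logits):
    """Entropy of a single softmax output distribution over the vocabulary
    (the third indicator, averaged over decoding steps in practice)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

avg, var = sentence_logprob_stats([-0.1, -0.5, -2.3, -0.2])  # toy log-probs
ent = softmax_entropy([2.0, 1.0, 0.1])                       # toy logits
print(round(avg, 3), round(var, 3), round(ent, 3))
```

Higher average log-probability and lower variance/entropy indicate a more confident, and empirically a higher-quality, translation.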
|
{ |
|
"text": "We train an XGboost regression model (Chen and Guestrin, 2016) 5 to combine these features to predict DA judgments and test the performance of the model in multilingual settings. Table 3 shows Pearson correlation for the regression models trained on each language pair and evaluated either on the same language pair or other language pairs. 6 The 'All langs' row indicates the results when training on all language pairs, whereas 'Best feature' indicates the correlation obtained by the best performing feature individually. Comparing these results to the results for pre-trained representations in Table 1 we can make three observations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 606, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Glass-box QE Approach", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "First, although the correlation is generally lower for the glass-box approach, performance degradation when testing on different language pairs is smaller. For all language pairs except English-German, we observe a relatively small decrease in performance (up to 0.09) when training and test language pairs are different. This suggests that the indicators extracted from the NMT model are more passes through the network, collecting posterior probabilities generated by the model with parameters perturbed by dropout and using the resulting distribution to approximate model uncertainty. 5 We chose a regression model over an NN given the smaller number of features available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 588, |
|
"end": 589, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box features are more comparable across languages", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "6 These experiments do not include Russian-English, as the corresponding NMT system is an ensemble and it is not evident how the glass-box features proposed by Fomicheva et al. (2020) should be extracted in this case. comparable across languages than input features from pre-trained representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 183, |
|
"text": "Fomicheva et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box features are more comparable across languages", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "We note that the NMT systems in MLQE dataset were all based on Transformer architecture but trained using different amount of data and have different overall output quality. Interestingly, the results of this experiment indicate that glass-box information extracted from these systems could be language-independent. More experiments are needed to confirm if this observation can be extrapolated to other datasets, language pairs, domains and MT systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Glass-box features are more comparable across languages", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "Second, by contrast to the results in Table 1 where multilingual training brings significant improvements, we do not see any gains in performance from training with all available data. The reason could be that training a regression model with a small number of features does not require large amounts of training data, and therefore performance does not improve with additional data. English-German is an exception with a large gain in correlation when training on all language pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multilingual gains are limited by learning algorithm", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "Finally, similarly to the black-box approach in Table 1, the performance for English-German benefits from using the data from other language pairs for training. This indicates that the results are affected by factors that are independent of the approach used for prediction. To better understand these results we look at the distribution of NMT logprobabilities ( Figure 5 ) and the distribution of DA scores (Figure 3 ). While log-probability distributions are comparable across language pairs, the distributions of DA scores are very different. We suggest, therefore, that the decrease in performance when testing on a different language is related to a higher extent to the shift in the output distribution across languages (i.e. DA judgments) than to the shift in the input features. This also explains the difficulty for training and predicting on English-German data where the distribution of DA scores is highly skewed with minimal variability in the quality range.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 372, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 418, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The output label distribution matters", |
|
"sec_num": "5.2.3" |
|
}, |
|
{ |
|
"text": "From our various experiments, one setting that stood out is that of English-German. We suggest that the difficulty for predicting quality for this language pair was exacerbated by the metric used for evaluation. Because of its sample-dependence, Pearson correlation can be more sensitive to the output distribution. In contrast, an error-based metric like RMSE will be less sensitive to these variations. To illustrate these effects, in Figure 6 , we show the hierarchical clustering of language directions obtained by using the metric value from training on one direction and testing on another one as a notion of distance. In subfigure (a), we observe the clusters based on Pearson correlation as shown in Table 1 . In subfigure (b), we observe the same clustering done based on RMSE. It should be noted that in the former, En-De is a clear outlier, whereas in the latter, we have a clustering that is more consistent with the general maturity of the language pairs: Ne-En and Si-En are low resource, Ro-En and Et-En are medium resource, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 445, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 715, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We explored the use of multilingual contextual representations to build state-of-the-art multilingual QE models. From our experiments, we observed that: 1) multilingual systems are always better than bilingual systems; 2) having multi-task models, which share parts of the model across languages and specialise others, does not necessarily yield better results; and 3) multilingual systems for QE generalise well across languages and are powerful even in zero-shot scenarios. We also contrasted the use of pre-trained representations which are costly to obtain, to the use of glass-box features which can be extracted from the NMT system. We observed that glass-box features are very comparable across languages, and training multilingual systems with them adds little value. Finally, we observed that the distribution of the output labels matters for the evaluation of QE. Table 5 : RMSE of BASE QE models for different portions of training data (%data). We underline the best RMSE for each %data setting.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 874, |
|
"end": 881, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://statmt.org/wmt20/ quality-estimation-task.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/wmt20/ quality-estimation-task.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The best results for BASE-BL are underlined and bold marks the best results across all models. Significant improvements over BASE BL are marked with *. We use the Hotelling-Williams test for dependent correlations to compute significance of the difference between correlations(Williams, 1959) with p-value < 0.05.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This method consists in performing several forward", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Marina Fomicheva, Fr\u00e9d\u00e9ric Blain and Lucia Specia were supported by funding from the Bergamot project (EU H2020 Grant No. 825303).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For completeness, Tables 4 and 5 report RMSE scores for our main experiments. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 78, |
|
"text": "Tables 4 and 5 report RMSE scores for our main experiments.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Appendix", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Massively multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.00089" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "597--610", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00288" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A model of inductive bias learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Baxter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of artificial intelligence research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "149--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Baxter. 2000. A model of inductive bias learning. Journal of artificial intelligence research, 12:149-198.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Confidence estimation for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blatz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Fitzgerald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simona", |
|
"middle": [], |
|
"last": "Gandrabur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Kulesza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Sanchis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1220355.1220401" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004a. Confidence esti- mation for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Alberto Sanchis, and Nicola Ueffing. 2004b. Confidence estimation for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blatz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Fitzgerald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simona", |
|
"middle": [], |
|
"last": "Gandrabur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Kulesza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004b. Confidence estima- tion for machine translation. In COLING.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Xgboost: A scalable tree boosting system", |
|
"authors": [ |
|
{ |
|
"first": "Tianqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "785--794", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2939672.2939785" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 785-794, New York, NY, USA. Associa- tion for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02116" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Multi-task learning for multiple language translation", |
|
"authors": [ |
|
{ |
|
"first": "Daxiang", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dianhai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1723--1732", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1166" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- tiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", |
|
"authors": [ |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Ben- gio. 2016a. Multi-way, multilingual neural ma- chine translation with a shared attention mechanism. ArXiv, abs/1601.01073.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Zero-resource translation with multi-lingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baskaran", |
|
"middle": [], |
|
"last": "Sankaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fatos", |
|
"middle": [ |
|
"T." |
|
], |
|
"last": "Yarman-Vural", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. ArXiv, abs/1606.04164.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Fomicheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuo", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Yankovskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [], |
|
"last": "Blain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.10608" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. arXiv preprint arXiv:2005.10608.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Findings of the wmt 2019 shared tasks on quality estimation", |
|
"authors": [ |
|
{ |
|
"first": "Erick", |
|
"middle": [], |
|
"last": "Fonseca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Yankovskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F.", |
|
"T." |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 FT Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the wmt 2019 shared tasks on quality esti- mation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", |
|
"authors": [ |
|
{ |
|
"first": "Yarin", |
|
"middle": [], |
|
"last": "Gal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1050--1059", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Un- certainty in Deep Learning. In International Confer- ence on Machine Learning, pages 1050-1059.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Continuous measurement scales in human evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alistair", |
|
"middle": [], |
|
"last": "Moffat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Pro- ceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", |
|
"authors": [ |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Thorat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Vi\u00e9gas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "339--351", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00065" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Gre- gory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine transla- tion system: Enabling zero-shot translation. Trans- actions of the Association for Computational Lin- guistics, 5:339-351.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Unbabel's participation in the wmt19 translation quality estimation shared task", |
|
"authors": [ |
|
{ |
|
"first": "F\u00e1bio", |
|
"middle": [], |
|
"last": "Kepler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonay", |
|
"middle": [], |
|
"last": "Tr\u00e9nous", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Treviso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Vera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ant\u00f3nio", |
|
"middle": [], |
|
"last": "G\u00f3is", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.", |
|
"middle": [ |
|
"Amin" |
|
], |
|
"last": "Farajian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ant\u00f3nio", |
|
"middle": [ |
|
"V." |
|
], |
|
"last": "Lopes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F.", |
|
"T." |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F\u00e1bio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M Amin Farajian, Ant\u00f3nio V Lopes, and Andr\u00e9 FT Martins. 2019. Unbabel's par- ticipation in the wmt19 translation quality estima- tion shared task. WMT 2019, page 80.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation", |
|
"authors": [ |
|
{ |
|
"first": "Hyun", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jong-Hyeok", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seung-Hoon", |
|
"middle": [], |
|
"last": "Na", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "562--568", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality es- timation. In Proceedings of the Second Conference on Machine Translation, pages 562-568.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P." |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Transfer learning across low-resource, related languages for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Toan", |
|
"middle": [ |
|
"Q." |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. ArXiv, abs/1708.09803.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An overview of multi-task learning in", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "deep neural networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.05098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Large-scale multitask learning for machine translation quality estimation", |
|
"authors": [ |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "558--567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kashif Shah and Lucia Specia. 2016. Large-scale mul- titask learning for machine translation quality esti- mation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 558-567.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Findings of the wmt 2018 shared task on quality estimation", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [], |
|
"last": "Blain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ram\u00f3n", |
|
"middle": [], |
|
"last": "Astudillo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "702--722", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n Astudillo, and Andr\u00e9 F. T. Martins. 2018a. Findings of the wmt 2018 shared task on quality es- timation. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 702-722, Belgium, Brussels. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Estimating the sentence-level quality of machine translation systems", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Cancedda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Dymetman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nello", |
|
"middle": [], |
|
"last": "Cristianini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Annual Conference of the European Association for Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimat- ing the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Confer- ence of the European Association for Machine Trans- lation, pages 28-35, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Quality Estimation for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gustavo", |
|
"middle": [ |
|
"Henrique" |
|
], |
|
"last": "Paetzold", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018b. Quality Estimation for Machine Translation. Synthesis Lectures on Human Lan- guage Technologies. Morgan & Claypool Publish- ers.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Are we estimating or guesstimating translation quality?", |
|
"authors": [ |
|
{ |
|
"first": "Shuo", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6262--6267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuo Sun, Francisco Guzm\u00e1n, and Lucia Specia. 2020. Are we estimating or guesstimating translation qual- ity? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6262-6267, Online. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Is learning the n-th thing any easier than learning the first?", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sebastian Thrun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "640--646", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pages 640-646.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Regression Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Evan", |
|
"middle": [ |
|
"James" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Williams", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1959, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evan James Williams. 1959. Regression Analysis, vol- ume 14. Wiley, New York, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Baseline QE model.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Multi-task learning QE model (MTL) with a shared XLM-R encoder.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Distribution of DA judgments for different language pairs.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "Results of BASE QE models for various zeroshot and few-shot cross-lingual transfer settings. The solid lines represent the BASE ML models while the dashed lines are the BASE BL models.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "Distribution of NMT log-probabilities for different language pairs", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"text": "Language hierarchical clustering according to the results of training on one language and testing on another. In subfigure (a) we plot the clustering based on Pearson correlation. In subfigure (b) we plot the same clustering based on RMSE. The y axis denotes the distance between language pairs according to each evaluation.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Model Strategy</td><td>TrainL</td><td/><td/><td/><td>TestL</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"8\">En-De En-Zh Et-En Ro-En Si-En Ne-En Ru-En Avg</td></tr><tr><td/><td/><td>En-De</td><td>0.39</td><td>(-0.17)</td><td>(-0.39)</td><td>(-0.51)</td><td>(-0.32)</td><td>(-0.51)</td><td>(-0.35)</td><td>0.34</td></tr><tr><td/><td/><td>En-Zh</td><td>(-0.02)</td><td>0.47</td><td>(-0.19)</td><td>(-0.36)</td><td>(-0.16)</td><td>(-0.24)</td><td>(-0.17)</td><td>0.50</td></tr><tr><td/><td/><td>Et-En</td><td>(-0.10)</td><td>(-0.08)</td><td>0.75</td><td>(-0.20)</td><td>(-0.07)</td><td>(-0.10)</td><td>(-0.08)</td><td>0.57</td></tr><tr><td>BASE</td><td>BL</td><td>Ro-En Si-En</td><td>(-0.10) (-0.13)</td><td>(-0.14) (-0.13)</td><td>(-0.02) (-0.08)</td><td>0.89 (-0.15)</td><td>(-0.02) 0.66</td><td>(-0.04) (-0.05)</td><td>(-0.08) (-0.07)</td><td>0.60 0.57</td></tr><tr><td/><td/><td>Ne-En</td><td>(-0.10)</td><td>(-0.11)</td><td>(-0.06)</td><td>(-0.08)</td><td>(-0.01)</td><td>0.77</td><td>(-0.08)</td><td>0.60</td></tr><tr><td/><td/><td>Ru-En</td><td>(-0.04)</td><td>(-0.09)</td><td>(-0.19)</td><td>(-0.26)</td><td>(-0.11)</td><td>(-0.16)</td><td>0.70</td><td>0.54</td></tr><tr><td/><td>ML</td><td>All</td><td>0.47*</td><td>0.49</td><td>0.78*</td><td>0.89</td><td>0.70*</td><td>0.78</td><td>0.73</td><td>0.69</td></tr><tr><td/><td>LS</td><td>All</td><td>0.45</td><td>0.48</td><td>0.77</td><td>0.89</td><td>0.66</td><td>0.79</td><td>0.72</td><td>0.68</td></tr><tr><td/><td>LA</td><td>All</td><td>0.47*</td><td>0.49</td><td>0.76</td><td>0.89</td><td>0.66</td><td>0.78</td><td>0.72</td><td>0.68</td></tr><tr><td>MTL</td><td>LS</td><td>En-*</td><td>0.41</td><td>0.46</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>LA</td><td>En-*</td><td>0.45</td><td>0.46</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>LS</td><td>*-En</td><td>-</td><td>-</td><td>0.78*</td><td>0.90</td><td>0.69<
/td><td>0.79</td><td>0.73</td><td>-</td></tr><tr><td/><td>LA</td><td>*-En</td><td>-</td><td>-</td><td>0.78*</td><td>0.89</td><td>0.69</td><td>0.78</td><td>0.73</td><td>-</td></tr><tr><td/><td colspan=\"2\">\u2021 BERT-BiRNN (Fomicheva et al., 2020)</td><td>0.27</td><td>0.37</td><td>0.64</td><td>0.76</td><td>0.47</td><td>0.55</td><td>-</td><td>-</td></tr><tr><td colspan=\"3\">\u2021 WMT20 QE Shared Task 1 Leaderboard (June 2020)</td><td>0.47</td><td>0.48</td><td>0.79</td><td>0.90</td><td>0.65</td><td>0.79</td><td>0.78</td><td>0.69</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "ML across all TestL is 0.69, 0.03 (4.5%) higher than the average score (0.66) of the best BASE-BL scores across all TestL (diagonal in the top part ofTable 1). The results clearly show that multilingual models generally outperform bilingual models, even when the latter are optimised individually for different TestL. An interesting observation inTable 1is that some BASE-BL models trained on different TrainL than TestL can perform almost as well as the models trained on the same TrainL as TestL. For example," |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table><tr><td>0</td><td>BASE</td><td>ML</td><td>0.45</td><td>0.42</td><td>0.75</td><td>0.80</td><td>0.68</td><td>0.76</td><td>0.68</td><td>0.65</td></tr><tr><td>5</td><td>BASE</td><td>BL ML</td><td>0.13 0.38</td><td>0.39 0.44</td><td>0.65 0.74</td><td>0.70 0.85</td><td>0.58 0.67</td><td>0.63 0.76</td><td>0.63 0.71</td><td>0.53 0.65</td></tr><tr><td>10</td><td>BASE</td><td>BL ML</td><td>0.24 0.37</td><td>0.43 0.46</td><td>0.69 0.75</td><td>0.85 0.87</td><td>0.56 0.64</td><td>0.68 0.77</td><td>0.64 0.71</td><td>0.58 0.65</td></tr><tr><td>25</td><td>BASE</td><td>BL ML</td><td>0.27 0.40</td><td>0.45 0.46</td><td>0.70 0.75</td><td>0.87 0.88</td><td>0.61 0.66</td><td>0.72 0.76</td><td>0.70 0.71</td><td>0.62 0.66</td></tr><tr><td>50</td><td>BASE</td><td>BL ML</td><td>0.33 0.41</td><td>0.47 0.48</td><td>0.74 0.76</td><td>0.88 0.89</td><td>0.62 0.69</td><td>0.74 0.77</td><td>0.69 0.72</td><td>0.64 0.67</td></tr><tr><td>75</td><td>BASE</td><td>BL ML</td><td>0.39 0.46</td><td>0.47 0.49</td><td>0.75 0.78</td><td>0.88 0.89</td><td>0.64 0.70</td><td>0.76 0.78</td><td>0.70 0.71</td><td>0.66 0.69</td></tr><tr><td>100</td><td>BASE</td><td>BL ML</td><td>0.39 0.47</td><td>0.47 0.49</td><td>0.75 0.78</td><td>0.89 0.89</td><td>0.66 0.70</td><td>0.77 0.78</td><td>0.70 0.73</td><td>0.66 0.69</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "TestL % in-lang Model Strategy En-De En-Zh Et-En Ro-En Si-En Ne-En Ru-En Avg" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table><tr><td>sampled in-language training data and train BASE-</td></tr><tr><td>ML on both sub-sampled in-language training data</td></tr><tr><td>and all training data in other language pairs. In</td></tr><tr><td>other words, we want to know whether multilin-</td></tr><tr><td>gual QE helps if we have limited or no training</td></tr><tr><td>data in our desired test language pair. Results are</td></tr><tr><td>shown in Table 2. For ease of visualisation, we</td></tr><tr><td>also plot the Pearson's correlation results against</td></tr><tr><td>the percentage of in-language training data in Fig-</td></tr><tr><td>ure 4. As seen in</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results of BASE QE models for different portions of training data (%data). For BASE-ML, we train the models on subsampled training data in the test language pair and all training data in other language pairs. For BASE-BL, we train the models on only subsampled training data in the test language pair. We underline the best results for each %data setting." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Pearson correlation for regression models based on glass-box features trained on each language pair and evaluated either on the same language pair or other language pairs. For testing on a different language pair we report the difference in Pearson correlation with respect to training and testing on the same language pair. For comparison we show the correlation individual best performing feature with no learning involved." |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table><tr><td>TestL</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "RMSE for BASE and MTL QE models. We underline the best RMSE for BASE-BL and bold the best RMSE across all models." |
|
} |
|
} |
|
} |
|
} |