{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:41:31.195413Z"
},
"title": "The NiuTrans System for the WMT20 Quality Estimation Shared Task",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Kai",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Chen",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Zefan",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Shiqin",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Yingfeng",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Chenglong",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Xia",
"middle": [],
"last": "Meng",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Nuo",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "NLP Lab",
"institution": "Northeastern University",
"location": {
"settlement": "Shenyang",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the submissions of the NiuTrans Team to the WMT 2020 Quality Estimation Shared Task (Specia et al., 2020). We participated in all tasks and all language pairs. We explored the combination of transfer learning, multi-task learning and model ensemble. Results on multiple tasks show that deep transformer machine translation models and multilingual pretraining methods significantly improve translation quality estimation performance. Our system achieved remarkable results in multiple level tasks, e.g., our submissions obtained the best results on all tracks in the sentence-level Direct Assessment task 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the submissions of the NiuTrans Team to the WMT 2020 Quality Estimation Shared Task (Specia et al., 2020). We participated in all tasks and all language pairs. We explored the combination of transfer learning, multi-task learning and model ensemble. Results on multiple tasks show that deep transformer machine translation models and multilingual pretraining methods significantly improve translation quality estimation performance. Our system achieved remarkable results in multiple level tasks, e.g., our submissions obtained the best results on all tracks in the sentence-level Direct Assessment task 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality estimation (QE) evaluates the quality of machine translation output without human reference translations (Blatz et al., 2004) . It has a wide range of applications in post-editing and quality control for machine translation.",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "(Blatz et al., 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in all tasks and language pairs at the WMT 2020 QE shared task 2 , including sentence-level Direct Assessment tasks, word and sentence-level post-editing effort tasks, and document-level QE tasks. We investigated transfer learning and ensemble methods using recently proposed multilingual pre-trained models (Devlin et al., 2019; Conneau et al., 2020) as well as deep transformer models (Wang et al., 2019a) . Our main contributions are as follows:",
"cite_spans": [
{
"start": 324,
"end": 345,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 346,
"end": 367,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 403,
"end": 423,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We apply multi-phase pretraining (Gururangan et al., 2020) methods under both high-and low-resource settings to QE tasks. 1 Our number of submissions exceeded the daily or total limit.",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "(Gururangan et al., 2020)",
"ref_id": null
},
{
"start": 124,
"end": 125,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 http://www.statmt.org/wmt20/ quality-estimation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We incorporate deep transformer NMT models into QE models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a simple strategy to convert document-level tasks into word-and sentencelevel tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We explore effective ensemble methods for both word-and sentence-level predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Results on different level tasks show that our methods are very competitive. Our submissions achieved the best Pearson correlation on all language pairs of the sentence-level Direct Assessment task and the best results on English-Chinese post-editing effort tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present methods for the sentence-level Direct Assessment task in \u00a72. Then in \u00a73 and \u00a74, we describe our approaches to post-editing tasks and document-level tasks, respectively. System ensemble methods are discussed in \u00a75. We show the detail of our submissions and the results in \u00a76. We conclude and discuss future work in \u00a77.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sentence-level Direct Assessment task is a new task where sentences are annotated with Direct Assessment (DA) scores by professional translators rather than post-editing labels. DA scores for each sentence are rated from 0 to 100, and participants are required to score sentences according to z-standardized DA scores. The DA task consists of seven tracks for different language pairs and one multilingual track. Submissions were evaluated in terms of Pearson's correlation metric for the DA prediction against human DA (z-standardized mean DA score, i.e., z-mean).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Direct Assessment Task",
"sec_num": "2"
},
{
"text": "This task contains 7K sentences for training and 1K sentences for development on each language pair, including sentence scores and word probabilities from the NMT models. The organizer also provided parallel data used to train the NMT models except for Russian-English, ranging from high resource (En-De, En-Zh), medium resource (Ro-En), to low-resource (Et-En, Ne-En, Si-En).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Resources",
"sec_num": "2.1"
},
{
"text": "In addition to the official data, we also used some multilingual pre-trained models for fine-tuning, including multilingual BERT 3 (mBERT) and XLM-RoBERTa 4 (XLM-R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Resources",
"sec_num": "2.1"
},
{
"text": "Our baseline system was built upon unsupervised quality estimation methods proposed by Fomicheva et al. (2020) , which use out-of-box NMT models as sources of information for directly estimating translation quality. We utilized the output sentence probabilities from NMT models as indicators for QE tasks. Given the input sequence x, suppose the decoder generates an output sequence y = y 1 , . . . , y T of length T, the probability of generating y is factorized as:",
"cite_spans": [
{
"start": 87,
"end": 110,
"text": "Fomicheva et al. (2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x, \u03b8) = T t=1 p (y t |y <t , x, \u03b8)",
"eq_num": "(1)"
}
],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
{
"text": "where \u03b8 represents model parameters. The output probability distribution p (y t | y <t , x, \u03b8) is produced by the decoder over the softmax function. We considered the sequence-level translation probability normalized by length:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "TP = 1 T T t=1 log p (y t |y <t , x, \u03b8)",
"eq_num": "(2)"
}
],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
{
"text": "And the probability generated from perturbed parameters with dropout, we performed N times inference and used the averaged output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathrm{D\\text{-}TP} = \\frac{1}{N} \\sum_{n=1}^{N} \\mathrm{TP}_{\\hat{\\theta}_{n}}",
"eq_num": "(3)"
}
],
"section": "Unsupervised Quality Estimation",
"sec_num": "2.2"
},
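As a concrete illustration of Equations 2 and 3, the sketch below computes TP and its Monte Carlo dropout variant D-TP. It assumes a generic autoregressive NMT model with a hypothetical `score(src, tgt)` helper returning per-token log-probabilities; this interface is illustrative, not part of any particular toolkit.

```python
import torch

def sentence_tp(log_probs: torch.Tensor) -> float:
    """Length-normalized sequence log-probability TP (Eq. 2).

    log_probs: tensor of shape (T,) holding log p(y_t | y_<t, x, theta)
    for each target token, gathered from the decoder softmax.
    """
    return log_probs.mean().item()

def sentence_dtp(model, src, tgt, n_passes: int = 20) -> float:
    """Monte Carlo dropout estimate D-TP (Eq. 3): average TP over
    N forward passes with dropout kept active at inference time."""
    model.train()  # keep dropout layers stochastic
    scores = []
    with torch.no_grad():
        for _ in range(n_passes):
            # model.score is a hypothetical helper returning the (T,)
            # per-token log-probabilities of tgt given src.
            scores.append(sentence_tp(model.score(src, tgt)))
    model.eval()
    return sum(scores) / n_passes
```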
{
"text": "Fine-tuning pre-trained language models have become the foundation of today's NLP (Devlin et al., 2019; Conneau et al., 2020) . Recent advances in pre-trained multilingual language models lead to state-of-the-art results on QE tasks (Kim et al., 2019; Kepler et al., 2019a) . Similar to Gururangan et al. (2020), we continued training multilingual pre-trained models in both domain-and taskadaptive manners. Domain-adaptive pretraining uses a straightforward approach-we continue pretraining mBERT and XLM-R on the parallel corpora provided by the organizers, which is used to train the MT systems. Unlike the training data labeled with DA scores, the parallel data for different language pairs vary. The corpus of pre-trained language models also has the problem of data imbalance. In practice, we increased the training frequency of low-resource data.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 104,
"end": 125,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 233,
"end": 251,
"text": "(Kim et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 252,
"end": 273,
"text": "Kepler et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-phase Pretraining",
"sec_num": "2.3"
},
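A minimal sketch of the domain-adaptive step, assuming the HuggingFace transformers and datasets libraries; the file name `parallel.all.txt` (one concatenated source-target pair per line) and all hyperparameters are placeholders rather than the values behind the submissions.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Low-resource language pairs can be oversampled simply by repeating
# their lines in this file, mirroring the frequency trick above.
raw = load_dataset("text", data_files={"train": "parallel.all.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments("xlmr-dapt", per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator)
trainer.train()  # continue masked-language-model pretraining in-domain
```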
{
"text": "Task-adaptive pretraining refers to pretraining on the unlabeled training set for a given task. Compared to domain-adaptive pretraining, it uses a far smaller corpus, but the data is much more taskrelevant. We used the same models as the domainadaptive pretraining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-phase Pretraining",
"sec_num": "2.3"
},
{
"text": "Similar to previous work (Kepler et al., 2019a; Yankovskaya et al., 2019) , we used models trained with the above methods as feature extractors for the sentence-level scoring tasks. We treated the scoring task as a regression task. Following standard practice, we added a separator token between source and target sentences and passed the pooled representation from the encoder to a task-specific feed-forward layer for classification. We used the z-standardized mean DA score as the ground truth and minimized the mean squared error during training.",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "(Kepler et al., 2019a;",
"ref_id": null
},
{
"start": 48,
"end": 73,
"text": "Yankovskaya et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "2.4"
},
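The regression setup above might be sketched as follows with the HuggingFace transformers API; pooling the first token's representation and the head sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceScorer(nn.Module):
    """Pooled encoder representation -> feed-forward regression head,
    trained with MSE against z-standardized mean DA scores."""
    def __init__(self, name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, **enc):
        pooled = self.encoder(**enc).last_hidden_state[:, 0]  # <s> token
        return self.head(pooled).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = SentenceScorer()
# Tokenizing a (source, target) pair inserts the separator token.
batch = tokenizer(["Das ist gut."], ["This is good."],
                  return_tensors="pt", padding=True)
loss = nn.functional.mse_loss(model(**batch),
                              torch.tensor([0.42]))  # z-mean label
loss.backward()
```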
{
"text": "This task consists of the word-and sentence-level tracks to evaluate post-editing effort. The wordlevel tasks predicts OK or BAD tags in both source and target sequences. It evaluates the Matthews correlation coefficient 5 (MCC) for tags. The sentencelevel task predicts HTER scores, which is the ratio between the number of edits needed and the reference translation length. It evaluates Pearson's correlation for the HTER prediction. There are two language pairs in both the word-and sentencelevel tasks, including English-German (En-De) and English-Chinese (En-Zh).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word and Sentence-Level Post-editing Effort Task",
"sec_num": "3"
},
{
"text": "The labeled data consists of 7K sentences for training and 1K sentences for development for each language pair. We used the additional parallel data provided by the organizers to train predictors, containing about 20M En-Zh sentence pairs and 23M En-De sentence pairs after pre-processing with the NiuTrans SMT toolkit (Xiao et al., 2012) . Pretrained language models include mBERT and XLM-R, were also used for Task 2.",
"cite_spans": [
{
"start": 319,
"end": 338,
"text": "(Xiao et al., 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Resources",
"sec_num": "3.1"
},
{
"text": "The predictor-estimator architecture and its variants (Kim et al., 2017; Kepler et al., 2019b) had established state-of-the-art on WMT QE tasks. The system consists of a word prediction module (predictor) trained from additional large-scale parallel corpora and a quality estimation module (estimator) trained from quality-annotated data. For the sentence-level tasks and target-side wordlevel tasks, we employed the official bi-RNN predictor-estimator trained with OpenKiwi (Kepler et al., 2019b) as the baseline. Similar to Wang et al. 2019b, we used NMT models trained with back-translation as predictors.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Kim et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 73,
"end": 94,
"text": "Kepler et al., 2019b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predictor-Estimator Models",
"sec_num": "3.2"
},
{
"text": "The original predictor and estimator use RNNs to encode the source and predict tags or scores. We also implemented two transformer-based predictors which replace the RNN with transformer (Vaswani et al., 2017) or deep transformer architectures (Wang et al., 2019a; Li et al., 2019) . We compared different tokenizing strategies such as word segmentation and byte pair encoding (BPE) (Sennrich et al., 2016) for all language pairs.",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 244,
"end": 264,
"text": "(Wang et al., 2019a;",
"ref_id": "BIBREF17"
},
{
"start": 265,
"end": 281,
"text": "Li et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 383,
"end": 406,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predictor-Estimator Models",
"sec_num": "3.2"
},
{
"text": "The word-and sentence-level tasks are highly related to their annotations are commonly based on the HTER measure. We used a linear summation of sentence-level and target word-level objective losses as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.3"
},
{
"text": "L = L mt.word + L mt.gap + L HT ER (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.3"
},
{
"text": "where the components denote the loss of targetword, target-gap, and predictions for HTER score. We also trained models using source sentence and origin/post-edited MT output to predict the source-side word level tags:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L SRC = L src\u2212mt + L src\u2212pe",
"eq_num": "(5)"
}
],
"section": "Multi-task learning",
"sec_num": "3.3"
},
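A minimal sketch of the combined objective in Equation 4, with cross-entropy for the OK/BAD tag losses and mean squared error for HTER; the tensor shapes are assumptions, and the source-side loss of Equation 5 is formed analogously from two tag losses.

```python
import torch.nn as nn

ce = nn.CrossEntropyLoss()  # OK/BAD tag losses
mse = nn.MSELoss()          # sentence-level HTER loss

def multitask_loss(word_logits, word_tags, gap_logits, gap_tags,
                   hter_pred, hter_gold):
    """Linear sum of target-word, target-gap and HTER losses (Eq. 4).
    Logits have shape (num_positions, 2); tags are 0 (OK) / 1 (BAD)."""
    return (ce(word_logits, word_tags)      # L_mt.word
            + ce(gap_logits, gap_tags)      # L_mt.gap
            + mse(hter_pred, hter_gold))    # L_HTER
```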
{
"text": "This task aims to predict document-level quality scores as well as fine-grained annotations. Each document is annotated for translation errors with word span, severity, and error type 6 . Additionally, document-level scores (MQM scores) are generated from the error annotations using the method proposed by S\u00e1nchez Torr\u00f3n and Koehn (2016). The annotation task is evaluated by F1 scores against the gold annotations, and the scoring task by Pearson's correlation between the gold and predicted MQM scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level QE Task",
"sec_num": "4"
},
{
"text": "We also used 35M WMT14 En-Fr parallel data to train our predictors for the annotation task except for the official 1,448 En-Fr documents. For the scoring task, we used pre-trained language models, including mBERT and XLM-R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Resources",
"sec_num": "4.1"
},
{
"text": "Following Kepler et al. (2019a), we treated the document-level annotation problem as a word-level task, with each sentence processed separately. We tokenized the training set and tagged each token with an OK/BAD tag. Specifically, each token was labeled as BAD if it contains any character in error spans. Besides token tags, we labeled a gap as BAD if a span begins and ends exactly in its borders. Otherwise, it was labeled as OK. During the test time, we mapped BAD tags to annotations in a single scheme: (a) continuous labels were merged into an error annotation; (b) individual labels were directly converted to error annotations. We ignored the severity information and always treated the error as the most frequent 'major'. We adopt the predictor-estimator architecture for this task. We implemented our predictors with deep transformers with relative position representation. The settings for model training are described in (Hu et al., 2020) . We also compared two tokenization schemes, including word-level tokenization and BPE. Similar to Task 2, we jointly trained our models with target-side word-level and word gap tasks.",
"cite_spans": [
{
"start": 934,
"end": 951,
"text": "(Hu et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Annotating Task",
"sec_num": "4.2"
},
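The span-to-tag conversion and the reverse merging described above can be sketched as follows; whitespace tokens and character-offset error spans are simplifying assumptions.

```python
def tag_tokens(text, tokens, error_spans):
    """Label a token BAD if any of its characters falls inside an
    annotated error span; spans are (start, end) character offsets."""
    tags, pos = [], 0
    for tok in tokens:
        start = text.index(tok, pos)
        end = start + len(tok)
        pos = end
        bad = any(start < e and s < end for s, e in error_spans)
        tags.append("BAD" if bad else "OK")
    return tags

def tags_to_spans(tags):
    """Merge runs of BAD tags back into error annotations, returned
    as (first_token, last_token) index pairs; isolated BAD tags
    become single-token annotations."""
    spans, start = [], None
    for i, t in enumerate(tags + ["OK"]):  # sentinel closes a final run
        if t == "BAD" and start is None:
            start = i
        elif t != "BAD" and start is not None:
            spans.append((start, i - 1))
            start = None
    return spans
```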
{
"text": "We treated the document-level scoring task as a sentence-level task with a simple mapping scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
{
"text": "We also ignored all critical and minor errors, and thus the MQM score for each document is calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
{
"text": "MQM = 100 \u00d7 (1 \u2212 W \u00d7 Count major Count word ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
{
"text": "where Count major and Count word are the count of major errors and total words, respectively. W denotes the weight of major errors, which is fixed at 5 in our experiments. Then we score each sentence according to the number of errors it contains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
{
"text": "Score sent = 100 \u2212 W \u00d7 Count major (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
{
"text": "We applied the same fine-tuning strategies, as mentioned in Sec 2, to this task. During the test time, the count of errors was retrieved from the predicted score of all sentences. A document score is 0 if it has too many errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Scoring Task",
"sec_num": "4.3"
},
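A sketch of Equations 6 and 7 and of the test-time inversion just described, with W fixed at 5; clamping to 0 implements the rule for documents with too many errors.

```python
W = 5  # weight of a 'major' error

def sentence_score(count_major: int) -> float:
    """Eq. 7: per-sentence score from the number of major errors."""
    return 100 - W * count_major

def document_mqm(sentence_scores, word_counts) -> float:
    """Recover a document's MQM (Eq. 6) from predicted sentence scores:
    each score is mapped back to an error count by inverting Eq. 7."""
    errors = sum(round((100 - s) / W) for s in sentence_scores)
    mqm = 100 * (1 - W * errors / sum(word_counts))
    return max(mqm, 0.0)  # too many errors -> document score 0
```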
{
"text": "In addition to training models for each task, we also explored effective ensemble methods to combine outputs for different level tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Ensemble",
"sec_num": "5"
},
{
"text": "We used two approaches to ensemble word-level predictions for Task 2 and Task 3. Voting-Based Ensemble. Voting is the easiest method to combine predictions from multiple models. We chose the label with the most votes for each token as the output. Averaging-Based Ensemble. Similar to Kepler et al. (2019a) , we used Powell's conjugate direction method to optimize the task metric (MCC or F1 score) and learn the weights of different systems on the development set.",
"cite_spans": [
{
"start": 284,
"end": 305,
"text": "Kepler et al. (2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level ensemble",
"sec_num": "5.1"
},
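Both word-level strategies might be sketched as follows, assuming scipy and scikit-learn; treating system outputs as BAD probabilities and thresholding the weighted blend is an illustrative simplification.

```python
from collections import Counter

import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import matthews_corrcoef

def vote(tag_lists):
    """Majority vote per token over several systems' tag sequences."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_lists)]

def learn_weights(system_probs, gold_tags, threshold=0.5):
    """Learn per-system weights on the dev set with Powell's method.
    system_probs: (n_systems, n_tokens) array of BAD probabilities;
    gold_tags: 0/1 array of gold OK/BAD labels."""
    def neg_mcc(w):
        w = np.abs(w) + 1e-9  # keep the weighted average well defined
        blended = np.average(system_probs, axis=0, weights=w)
        return -matthews_corrcoef(gold_tags, blended > threshold)
    res = minimize(neg_mcc, x0=np.ones(len(system_probs)), method="Powell")
    return np.abs(res.x) + 1e-9
```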
{
"text": "We averaged the predicted scores from multiple models associated with different weights. The weights were also learned on the development set using Powell's method. We removed outliers from the candidate pool to make the prediction more stable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level ensemble",
"sec_num": "5.2"
},
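For the sentence level, a weighted average with a simple outlier filter might look like this; the z-score rule is an assumed stand-in, since the exact outlier criterion is not specified above.

```python
import numpy as np

def sentence_ensemble(scores, weights, z_thresh=2.0):
    """Weighted average of per-system sentence scores, masking out a
    system on sentences where it deviates strongly from the pool."""
    scores = np.asarray(scores, dtype=float)  # (n_systems, n_sentences)
    mean, std = scores.mean(axis=0), scores.std(axis=0) + 1e-9
    keep = np.abs(scores - mean) / std <= z_thresh
    w = np.asarray(weights, dtype=float)[:, None] * keep
    return (w * scores).sum(axis=0) / (w.sum(axis=0) + 1e-9)
```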
{
"text": "6 Experiments and Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level ensemble",
"sec_num": "5.2"
},
{
"text": "Below we describe our systems for Task 1. Unsupervised baseline. As described in \u00a72, our baseline system leverages the output probabilities from NMT models to assess the sentence score. We performed 20 inference passes and set the dropout rate as 0.3 for all language pairs. Pretraining and fine-tuning. We experimented with different pre-trained models for multi-phase pretraining and fine-tuning. Specifically, we used three model settings, including mBERT-base cased (\u223c200M parameters), XLM-R-base (\u223c300M parameters), and XLM-R-large (\u223c600M parameters). Systems for the first six language pairs in Table 2 were pre-trained on the parallel data while the system for Ru-En was only trained on the task data. We combined predictions on the first six language pairs as the submission to the multilingual task.",
"cite_spans": [],
"ref_spans": [
{
"start": 601,
"end": 609,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Task 1",
"sec_num": "6.1"
},
{
"text": "As shown in Table 1 , unsupervised QE indicators obtained competitive results using sequencelevel probability from NMT models. Disturbing the model parameters improves the performance of all language pairs. We did not combine the predictions from unsupervised methods into our submissions. with pretraining and fine-tuning. We combined predictions from 10 pre-trained models with three different settings: mBERT, XLM-R-base, and XLM-R-large. We only report the results with the highest Pearson (r) correlation on the test data. We observe that larger models consistently outperformed small ones for all language pairs. Besides, ensemble methods significantly improved the performance on the test set. It also shows that the quality estimation of high-resource languages performs far worse than low-resource languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task 1",
"sec_num": "6.1"
},
{
"text": "For En-Zh, we trained 5-10 single models for each setting: token-based bi-RNNs (RNN-Token), token-based transformer (Trans-Token), BPEbased transformer (Trans-BPE), and BPE-based deep transformer with 25 encoder layers (Deep Trans). For En-De, we created three systems using the same architectures as En-Zh except for the deep transformer. We applied the multi-task learning strategies to the target-side word-level and sentence-level tasks described as \u00a73. Table 3 shows the results on the English-Chinese word-level task. Deep transformer and BPE tokenization bring the most gains to both the targetside MCC. Results on the English-German task are listed in Table 4 . It shows that our ensemble methods are effective in boosting performance across different tasks. 6.3 Task 3 Table 5 shows the results obtained by three different models and the ensemble on the annotation task. BPE brings about 0.03 points improvements of F1 scores on both the validation and test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 660,
"end": 667,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 778,
"end": 785,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Task 2",
"sec_num": "6.2"
},
{
"text": "The system ensemble further pushes the score by about 0.02. Table 5 also lists the results of the scoring task. We report the results of two pretraining methods and their ensemble on the test data. XLM-R outperformed the mBERT model by 0.04 points in the Pearson correlation, while the ensemble brought a slight benefit.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Task 2",
"sec_num": "6.2"
},
{
"text": "This paper describes the submissions of the Niu-Trans Team to the WMT 2020 QE task. We explored the combination of transfer learning, multitask learning, and model ensemble. Different level tasks show that deep transformer NMT models and multilingual pretraining methods significantly boost QE models' performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Although our system achieved impressive results in all tasks and language pairs, there are still many problems. For instance, the translation quality estimation of low-resource languages performs much better than that of high-resource. It raises the concern of whether our model learns the evaluation criteria instead of memorizing data, as suggested in Sun et al. (2020) . Besides, strong NMT models help quality estimation, but can we use QE models to improve NMT systems' learning?",
"cite_spans": [
{
"start": 354,
"end": 371,
"text": "Sun et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We plan to answer these questions in the future and promote the joint improvement of QE and NMT models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://huggingface.co/ bert-base-multilingual-cased 4 https://github.com/facebookresearch/ XLM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/ Matthews_correlation_coefficient",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.qt21.eu/mqm-definition/ definition-2015-12-30.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005) and the National Key R&D Program of China (No. 2019QY1801). The authors would like to thank the anonymous reviewers for their comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence esti- mation for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, F. Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": null,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. ArXiv, abs/2005.10608.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "2020. Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The NiuTrans system for WNGT 2020 efficiency task",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yinqiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yanyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chenglong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "204--210",
"other_ids": {
"DOI": [
"10.18653/v1/2020.ngt-1.24"
]
},
"num": null,
"urls": [],
"raw_text": "Chi Hu, Bei Li, Yinqiao Li, Ye Lin, Yanyang Li, Chen- glong Wang, Tong Xiao, and Jingbo Zhu. 2020. The NiuTrans system for WNGT 2020 efficiency task. In Proceedings of the Fourth Workshop on Neural Gen- eration and Translation, pages 204-210, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unbabel's participation in the WMT19 translation quality estimation shared task",
"authors": [
{
"first": "",
"middle": [],
"last": "Lopes",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lopes, and Andr\u00e9 F. T. Martins. 2019a. Unbabel's participation in the WMT19 translation quality esti- mation shared task. In WMT.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OpenKiwi: An open source framework for quality estimation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 F. T. Martins. 2019b. OpenKiwi: An open source framework for quality estimation. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4763"
]
},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Con- ference on Machine Translation, pages 562-568, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "QE BERT: Bilingual BERT using multi-task learning for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Joon-Ho",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Hyun-Ki",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "85--89",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5407"
]
},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, and Seung- Hoon Na. 2019. QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 85-89, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The niutrans machine translation systems for wmt19",
"authors": [
{
"first": "Bei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yinqiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jiqiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ziyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zeyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hexuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tengbo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yanyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bei Li, Yinqiao Li, Chen Xu, Y. Lin, Jiqiang Liu, H. Liu, Ziyang Wang, Y. Zhang, N. Xu, Zeyang Wang, Kai Feng, Hexuan Chen, Tengbo Liu, Yanyang Li, Qiang Wang, Tong Xiao, and Jingbo Zhu. 2019. The niutrans machine translation sys- tems for wmt19. In WMT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Findings of the wmt 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 FT Martins. 2020. Findings of the wmt 2020 shared task on quality estimation. In Proceed- ings of the Fifth Conference on Machine Translation: Shared Task Papers.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Are we estimating or guesstimating translation quality?",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6262--6267",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.558"
]
},
"num": null,
"urls": [],
"raw_text": "Shuo Sun, Francisco Guzm\u00e1n, and Lucia Specia. 2020. Are we estimating or guesstimating translation qual- ity? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6262-6267, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine translation quality and post-editor productivity",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "S\u00e1nchez Torr\u00f3n",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina S\u00e1nchez Torr\u00f3n and Philipp Koehn. 2016. Ma- chine translation quality and post-editor productivity. In In Proceedings of AMTA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning deep transformer models for machine translation",
"authors": [
{
"first": "Qiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Changliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1810--1822",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019a. Learning deep transformer models for ma- chine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1810-1822, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Niutrans submission for ccmt19 quality estimation task",
"authors": [
{
"first": "Ziyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hexuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zeyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "CCMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziyang Wang, Hui Liu, Hexuan Chen, Kai Feng, Zeyang Wang, Bei Li, Chen Xu, Tong Xiao, and Jingbo Zhu. 2019b. Niutrans submission for ccmt19 quality estimation task. In CCMT.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Niutrans: An open source toolkit for phrasebased and syntax-based machine translation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Xiao, Jingbo Zhu, Hao Zhang, and Qiang Li. 2012. Niutrans: An open source toolkit for phrase- based and syntax-based machine translation. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Quality estimation and translation metrics via pre-trained word and sentence embeddings",
"authors": [
{
"first": "Elizaveta",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "T\u00e4ttar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizaveta Yankovskaya, Andre T\u00e4ttar, and Mark Fishel. 2019. Quality estimation and translation metrics via pre-trained word and sentence embeddings. In ACL.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"4\">: Pearson (r) correlation between unsupervised</td></tr><tr><td colspan=\"4\">methods and human DA judgements on the validation</td></tr><tr><td colspan=\"4\">data for sentence-level DA tasks. We mark improve-</td></tr><tr><td colspan=\"2\">ments of D-TP by percentage.</td><td/><td/></tr><tr><td>Pair</td><td colspan=\"3\">mBERT XLM-R Ensemble</td></tr><tr><td>En-De</td><td>0.516</td><td>0.555</td><td>0.562</td></tr><tr><td>En-Zh</td><td>0.512</td><td>0.533</td><td>0.551</td></tr><tr><td>Ro-En</td><td>0.888</td><td>0.911</td><td>0.917</td></tr><tr><td>Et-En</td><td>0.809</td><td>0.820</td><td>0.833</td></tr><tr><td>Ne-En</td><td>0.816</td><td>0.821</td><td>0.830</td></tr><tr><td>Si-En</td><td>0.607</td><td>0.670</td><td>0.698</td></tr><tr><td>Ru-En</td><td>0.728</td><td>0.796</td><td>0.816</td></tr><tr><td>Multilingual</td><td>-</td><td>-</td><td>0.732</td></tr></table>"
},
"TABREF2": {
"text": "Pearson (r) correlation between pretraining methods and human DA judgements on the test data for sentence-level DA tasks. We only present the results of XLM-R-large for the second method.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF3": {
"text": "lists the results of the system ensemble",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>System</td><td colspan=\"2\">Target Source</td></tr><tr><td>RNN-word</td><td>0.467</td><td>-</td></tr><tr><td>Transformer-word</td><td>0.511</td><td>-</td></tr><tr><td>Transformer-subword</td><td>0.542</td><td>0.292</td></tr><tr><td colspan=\"2\">Deep Transformer-subword 0.545</td><td>-</td></tr><tr><td>Ensemble</td><td>0.610</td><td>0.308</td></tr></table>"
},
"TABREF4": {
"text": "Results of the English-Chinese post-editing task. 'word' denotes the system uses word-level tokenization.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>System</td><td colspan=\"2\">Target Source</td></tr><tr><td>RNN-word</td><td>0.395</td><td>-</td></tr><tr><td>Transformer-word</td><td>0.413</td><td>-</td></tr><tr><td colspan=\"2\">Transformer-subword 0.451</td><td>0.285</td></tr><tr><td>Ensemble</td><td>0.500</td><td>0.347</td></tr></table>"
},
"TABREF5": {
"text": "Results of the English-German post-editing tasks.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF7": {
"text": "Results of the document-level tasks. The deep transformer model contains 24 encoder layers and 6 decoder layers.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}