{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:44.522383Z"
},
"title": "Error Identification for Machine Translation with Metric Embedding and Attention",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Quality Estimation (QE) for Machine Translation has been shown to reach relatively high accuracy in predicting sentence-level scores, relying on pretrained contextual embeddings and human-produced quality scores. However, the lack of explanations accompanying the decisions made by end-to-end neural models makes the results difficult to interpret. Furthermore, word-level annotated datasets are rare due to the prohibitive effort required to produce them, while they could provide interpretable signals in addition to sentence-level QE outputs. In this paper, we propose a novel QE architecture which tackles both the word-level data scarcity and the interpretability limitations of recent approaches. Sentence-level and word-level components are jointly pretrained through an attention mechanism based on synthetic data and a set of MT metrics embedded in a common space. Our approach is evaluated on the Eval4NLP 2021 shared task and our submissions reach the first position in all language pairs. The extraction of metric-to-input attention weights shows that different metrics focus on different parts of the source and target text, providing strong rationales in the decision-making process of the QE model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Quality Estimation (QE) for Machine Translation has been shown to reach relatively high accuracy in predicting sentence-level scores, relying on pretrained contextual embeddings and human-produced quality scores. However, the lack of explanations accompanying the decisions made by end-to-end neural models makes the results difficult to interpret. Furthermore, word-level annotated datasets are rare due to the prohibitive effort required to produce them, while they could provide interpretable signals in addition to sentence-level QE outputs. In this paper, we propose a novel QE architecture which tackles both the word-level data scarcity and the interpretability limitations of recent approaches. Sentence-level and word-level components are jointly pretrained through an attention mechanism based on synthetic data and a set of MT metrics embedded in a common space. Our approach is evaluated on the Eval4NLP 2021 shared task and our submissions reach the first position in all language pairs. The extraction of metric-to-input attention weights shows that different metrics focus on different parts of the source and target text, providing strong rationales in the decision-making process of the QE model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality Estimation (QE) for Machine Translation (MT) (Blatz et al., 2004; Quirk, 2004; Specia et al., 2009) aims at providing quality scores or labels to MT output when translation references are not available. Sentence-level QE is usually conducted using human-produced direct assessments (DA) (Graham et al., 2013) or post-edits. The latter allows deriving token-level quality indicators such as good and bad tags (Fonseca et al., 2019). Token-level QE is particularly useful for applications such as source pre-editing or focused MT post-editing, but requires high-quality fine-grained annotated data for supervised learning. Furthermore, token-level quality indicators can be seen as explanations for sentence-level scores, whether given by humans or automatically produced. However, explainability of QE models' decisions is obscured by contemporary approaches relying on large data-driven neural-based models, making use of pretrained contextual language models (LM) such as BERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), albeit showing steady performance increases as reported in the QE shared tasks (Fonseca et al., 2019). Yet, the QE layers and architectures are rarely investigated, neither for performance nor for interpretability purposes, and the center of attention is mainly on large pretrained models and on generating additional (synthetic) training corpora.",
"cite_spans": [
{
"start": 53,
"end": 73,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF0"
},
{
"start": 74,
"end": 86,
"text": "Quirk, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 87,
"end": 107,
"text": "Specia et al., 2009)",
"ref_id": "BIBREF33"
},
{
"start": 295,
"end": 316,
"text": "(Graham et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 417,
"end": 439,
"text": "(Fonseca et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 985,
"end": 1006,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1015,
"end": 1041,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1122,
"end": 1144,
"text": "(Fonseca et al., 2019;",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a novel QE architecture which encompasses a metric-to-input attention mechanism allowing for several extensions of the standard QE approach. First, since sentence-level QE scores are usually obtained with surface-level MT metrics computed between translation outputs and human-produced references or post-edits, such as HTER (Snover et al., 2006), we propose to make use of several metrics simultaneously in order to model translation errors at various granularities, i.e. at the character, token, and phrase levels. Second, we design a metric embeddings model which represents metrics in their own space through a dedicated set of learnable parameters, allowing for straightforward extensions of the number and type of metrics. Third, by employing an attention mechanism between metric embeddings and bilingual input representations, the metric-to-input attention weights indicate where each metric focuses given an input sequence, increasing the interpretability of the QE components. We conduct a set of experiments on the Eval4NLP 2021 shared task dataset (Fomicheva et al., 2021) using only the training data along with the sentence-level scores officially released for the tasks (illustrated in Figure 1).",
"cite_spans": [
{
"start": 350,
"end": 371,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF31"
},
{
"start": 1086,
"end": 1110,
"text": "(Fomicheva et al., 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1223,
"end": 1231,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Religion offers certain means of cleansing the spirit .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Religion offers certain means of cleansing the spirit .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PE",
"sec_num": null
},
{
"text": "Sentence-level scores: DA 0.905 -chrF 1.0 -TER 0.0 -BLEU 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PE",
"sec_num": null
},
{
"text": "Source T\u00e4nu Uku kalastamiskirele p\u00e4\u00e4seb \u00d5nne 13 maja p\u00f5lengust .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PE",
"sec_num": null
},
{
"text": "Thanks to the breath of fresh fishing , 13 houses are escaped from contempt . PE Thanks to Uku 's passion for fishing, the house at \u00d5nne 13 is saved from fire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Sentence-level scores: DA 0.132 -chrF 0.366 -TER 0.667 -BLEU 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Figure 1: Samples of source sentences, automatic translations and human post-editions, along with direct assessment (DA) scores, taken from the Eval4NLP 2021 shared task Estonian-English validation set, representing high- and low-quality translations. Additional metrics are presented, namely chrF, TER and BLEU, to illustrate variations related to metric granularity. Green and red colors denote tokens annotated with classes 0 and 1 respectively. In addition, we produce a large synthetic corpus for QE pretraining using publicly available resources. The contributions of our work are the following: (i) a novel QE architecture using metric embeddings and attention-based interpretable neural components allowing for unsupervised token-level quality indicators, (ii) an extensible framework designed for unrestricted sentence-level QE scores or labels where new metrics can be added through finetuning, (iii) reproducibility guaranteed by the use of publicly available datasets, tools, and models, and (iv) word- and sentence-level QE results on par with or outperforming top-ranked approaches based on the official Eval4NLP 2021 shared task results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "The remainder of this paper is organized as follows. In Section 2, we introduce some background on QE based on contextual language models, followed in Section 3 by the detailed implementation of the proposed model using metric embeddings and attention. In Section 4, the experimental setup is presented, including the data and tools used, as well as the training procedure of our models. Section 5 contains the results obtained in our experiments along with their analysis and interpretation. A comparison of our method and results with previous work is made in Section 6. Finally, we conclude and suggest future research directions in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT",
"sec_num": null
},
{
"text": "Current state-of-the-art QE approaches are commonly based on sentence encoders taking source-translation pairs as input (Ranasinghe et al., 2020a; Wang et al., 2020; Rubino, 2020). Encoders are usually contextual LMs pretrained on large amounts of multilingual data. Existing QE implementations commonly rely on additional layers added on top of a pretrained LM, which enables multi-task learning for word- and sentence-level QE.",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Ranasinghe et al., 2020a;",
"ref_id": "BIBREF24"
},
{
"start": 147,
"end": 165,
"text": "Wang et al., 2020;",
"ref_id": "BIBREF39"
},
{
"start": 166,
"end": 179,
"text": "Rubino, 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Pretraining of contextual LMs is done by optimizing a prediction function given input sequences of tokens containing randomly masked tokens, or tokens randomly replaced by other tokens sampled from the vocabulary. Formally, given an input sequence Z of n tokens z_{1:n}, corresponding word (or subword) embeddings x_{1:n} with dimension d (x_{1:n} \u2208 R^{n\u00d7d}) are learned, and output contextual embeddings h^l_{1:n} \u2208 R^{n\u00d7d} are computed at each layer l \u2208 [1, L] \u2282 N of a Transformer encoder (Vaswani et al., 2017). Usually based on the output of the last encoder layer, the model optimizes the following loss function:",
"cite_spans": [
{
"start": 482,
"end": 504,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "L(s, t) = \u2212E_{r \u223c [1,n]} log P(z_r | z\u0303_r),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "where z_r are randomly sampled tokens from z to be masked or replaced, and z\u0303_r are the remaining tokens from z, with r \u2208 [1, n] \u2282 N. To perform QE, QE-specific layers are commonly added on top of pretrained contextual LMs, being fed with contextual token embeddings from the topmost (i.e., L-th) layer of the LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "For sentence-level QE, the specific component is a regression head formalized by y^s = \u03c3(\u03c6(h^L_{1:n}) \u2022 W^s + b^s), where y^s \u2208 [0, 1] \u2282 R and \u03c6 is a pooling function over the token representations. For token-level QE, a classification head is implemented as y^t_{1:n} = softmax(h^L_{1:n} \u2022 W^t + b^t), where y^t_{1:n} \u2208 R^{n\u00d7|C|} with C the set of word-level QE classes, and W^t and b^t are trainable parameters of the linear output layer. The output y^t_{1:n} of this QE component is a vector of labels indicating the translation quality of the corresponding input tokens. Source tokens are annotated according to the accuracy of their translation, while annotations of target tokens also take into account their position in the target sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Multiple losses are then computed, one for the sentence-level output and one for each of the word-level outputs (source and target tokens), based on gold labels to train (or finetune) the contextual LM and the QE model in an end-to-end fashion using backpropagation (Kim et al., 2017; Lee, 2020; Rubino and Sumita, 2020). Commonly used losses are cross-entropy and mean-squared error for classification and regression respectively.",
"cite_spans": [
{
"start": 252,
"end": 270,
"text": "(Kim et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 271,
"end": 281,
"text": "Lee, 2020;",
"ref_id": "BIBREF14"
},
{
"start": 282,
"end": 306,
"text": "Rubino and Sumita, 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "However, this approach has limitations. While the token-level QE implementation makes use of each input token representation in context thanks to the pretrained LM, the sentence-level QE component relies only on the pooled representation of the input sequence. This approach drastically limits the amount of information flowing through the sentence-level specific set of layers and may force the network to focus on cues and data artifacts which correlate with QE scores, instead of encoding translation-related features from source and target inputs. These findings corroborate the empirical observation made by Kepler et al. (2019), where the authors obtained the best word-level QE results using BERT and ignoring target language features when predicting source quality labels and vice versa. Additionally, most recent QE approaches do not allow for the interpretability of sentence-level QE predictions at test time, which leads to the current state of QE as a set of black-box components. Furthermore, token-level error annotations are costly to produce.",
"cite_spans": [
{
"start": 624,
"end": 644,
"text": "Kepler et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Motivated by these limitations of contextual LM-based QE, we propose a novel architecture employing metric embeddings and attention, computed between the contextual embeddings and the embedded QE criteria of the supervised learning task, namely MT automatic metrics or direct assessment scores provided by human annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "The metric embedding matrix E \u2208 R g\u00d7d , randomly initialized at the beginning of training, is added on top of the pretrained LM to model metrics in their own space, with a predefined set of sentence-level metrics M = {m 1 , . . . , m g }. Each metric is initially represented as a one-hot vector, noted m j \u2208 R g with j \u2208 [1, g] \u2282 N. Its corresponding embedding is retrieved with m j \u2022 E, forming the query used in the attention mechanism (eqn. 1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Q_{i,j} = (m_j \u2022 E) \u2022 W^Q_i (1), where i \u2208 [1, u] \u2282 N is the head index from a predefined number of heads u, Q_{i,j} \u2208 R^{d/u} is the query derived from the metric embedding corresponding to the one-hot vector m_j, and W^Q_i \u2208 R^{d\u00d7(d/u)} is a matrix of learnable parameters projecting the metric embedding into the dimensionality of the attention head (d/u). Note that we present the query computation for a single metric, but our implementation allows several metrics to be packed into a single query, sharing the parameter matrix W^Q_i (biases are omitted for the sake of simplicity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Keys and values, the two other components of the attention mechanism, noted K_i and V_i respectively, are computed from the output of the topmost layer of the pretrained LM, which is first fed into a position-wise feed-forward layer (eqn. 2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "ff_{1:n} = ReLU(h^L_{1:n} \u2022 W^{s,D_1}) \u2022 W^{s,D_2}, K_i = ff_{1:n} \u2022 W^K_i, V_i = ff_{1:n} \u2022 W^V_i (2), where W^K_i and W^V_i \u2208 R^{d\u00d7(d/u)} are the parameter matrices for the keys and values respectively, W^{s,D_1} \u2208 R^{d\u00d7b} and W^{s,D_2} \u2208 R^{b\u00d7d} are parameter matrices of the linear layers with dimensionality b and a ReLU activation function in between, leading to ff_{1:n} \u2208 R^{n\u00d7d}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Metric-to-token attention weights aim to represent the focus a given metric puts on specific parts of the input sequences. These attention weights are computed between the embedding of a metric and the contextually encoded input tokens (eqn. 3):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "\u03b1_{i,j,1:n} = \u03c3(Q_{i,j} \u2022 K_i^T / \u221a(d/u)) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "where \u03c3 is the sigmoid function. Note that the common way to compute attention weights, as presented in Vaswani et al. (2017), relies on softmax, which is based on the exponential function and is well-suited for tasks such as machine translation as it results in few alignments between the tokens involved in the attention mechanism. However, in the case of unsupervised sequence labeling such as token-level QE without annotated data, zero to many tokens may influence sentence-level scores given a metric. Thus, to allow more flexibility in the distribution of attention weights over input tokens, and following the approach presented in (Rei and S\u00f8gaard, 2018), we replaced softmax with sigmoid.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF38"
},
{
"start": 628,
"end": 651,
"text": "(Rei and S\u00f8gaard, 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Sentence-level scores are obtained for each metric with the weighted sum of value vectors for each attention head (eqn. 4):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "attn i,j = \u03b1 i,j,1:n V i ,",
"eq_num": "(4)"
}
],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "where attn_{i,j} \u2208 R^{d/u}, before concatenating the output of each head and projecting the result back into the dimensionality of the model (eqn. 5):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "y^s_j = (attn_{1,j} \u2295 . . . \u2295 attn_{u,j}) \u2022 W^O (5), with W^O \u2208 R^{d\u00d7d}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Finally, we project y^s_j from the model dimensionality to a single score through a metric-specific linear layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "\u0177^s_j = y^s_j \u2022 W^s_j, with W^s_j \u2208 R^{d\u00d71} and \u0177^s_j \u2208 [0, 1] \u2282 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Token-level QE scores are computed from the attention weights (see eqn. 3) followed by three transformations: combination of the attention heads through a linear transformation, concatenation of the token embeddings with the combined attention heads, and combination of the metrics through a final linear transformation (eqn. 6):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t j,1:n = \u03b1 j,1:n \u2022 W t,H y t 1:n = (y t 1:n \u2295 h L 1:n ) \u2022 W t,O",
"eq_num": "(6)"
}
],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "where \u03b1_{j,1:n} = (\u03b1_{1,j,1:n} \u2295 . . . \u2295 \u03b1_{u,j,1:n}), y^t_{1:n} = (y^t_{1,1:n} \u2295 . . . \u2295 y^t_{g,1:n}), and W^{t,H} \u2208 R^{u\u00d71} and W^{t,O} \u2208 R^{(d+g)\u00d71} are parameter matrices of linear layers, leading to y^t \u2208 [0, 1] \u2282 R for each token in the input sequence z_{1:n}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "The learning process allows for supervised or unsupervised token-level QE. Note that all learnable parameters of the sentence-level QE components except W^s_j are shared between metrics, including the metric embeddings matrix E. We believe that such an approach enables capturing translation errors at different granularities according to the specificity of each metric, e.g., characters, tokens and phrases, while keeping a reasonable total number of learnable parameters. The loss functions for sentence-level QE are mean-squared error, while the losses for token-level QE are cross-entropy. The final loss is obtained by linearly combining all losses computed for each output of the model. The general architecture of our QE model is illustrated in Figure 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Figure 2: Architecture of the metric embeddings and attention mechanism. Shaded elements, curved arrows and \u2295 are parameters of the model; i and j are the attention head and the metric indexes respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Supervised learning is conducted by computing losses according to each output of the model and their corresponding gold labels from the training data. Thus, for the sentence-level QE layers, we compute one loss per metric (mean-squared error), while for the token-level QE layers, if token-level annotations are available, two losses allow optimizing the model for source and target tokens separately (cross-entropy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "Unsupervised learning is conducted when token-level annotations are not available, which is one of the objectives in the constrained task of Eval4NLP 2021. In this case, only sentence-level losses are used to optimize the parameters of the model through backpropagation. Following the guidelines of the shared task, we do not use the direct assessment annotations made by humans at the word level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Embedding and Attention",
"sec_num": "3"
},
{
"text": "This section presents our experimental setup, including the pretrained models, the datasets and the training procedure. All pretrained models and scripts used in our experiments are based on PyTorch (Paszke et al., 2019) and all computations are conducted on NVIDIA V100 GPUs with CUDA v10.2.",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Two types of pretrained models were necessary to conduct our experiments: contextual embedding LMs to encode bilingual input sequences and MT models to produce synthetic data required for QE pretraining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4.1"
},
{
"text": "Contextual embedding LMs used in our experiments are based on a pretrained XLM-R checkpoint, namely xlm-roberta-large from the HuggingFace Transformers library (Wolf et al., 2020). This model, initially introduced in (Conneau et al., 2020), was pretrained on 2.5TB of filtered CommonCrawl data, covering 100 languages with a vocabulary of 250k BPE tokens (Sennrich et al., 2016), 1,024 embedding and hidden-state dimensions, 4,096-dimensional feed-forward layers and 16 attention heads.",
"cite_spans": [
{
"start": 161,
"end": 180,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 219,
"end": 241,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 382,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4.1"
},
{
"text": "MT models used in our experiments are transformer-based neural MT (NMT) models. For two language pairs and translation directions of the Eval4NLP 2021 shared task, namely Estonian\u2192English (ET-EN) and Romanian\u2192English (RO-EN), we used pretrained NMT models made available by the WMT'20 QE shared task organizers . 1 For German\u2192Chinese (DE-ZH) and Russian\u2192German (RU-DE), the two zero-shot pairs of the shared task, we used the mBART50 model (Liu et al., 2020; Tang et al., 2020 ). 2 All NMT models are based on the fairseq library (Ott et al., 2019) .",
"cite_spans": [
{
"start": 440,
"end": 458,
"text": "(Liu et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 459,
"end": 476,
"text": "Tang et al., 2020",
"ref_id": null
},
{
"start": 530,
"end": 548,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4.1"
},
{
"text": "Two datasets were used in our experiments: a synthetic dataset for QE pretraining, and the shared task dataset consisting of training, validation and test sets. Details of the latter dataset are presented in Table 1, while we give more information about the synthetic data in this section. Synthetic data generation was based on gathered parallel corpora translated by the NMT systems presented in Section 4.1. The translated sentences were compared to the target side of the parallel corpora to produce sentence-level scores based on the chrF (Popovi\u0107, 2016), TER (Snover et al., 2006) and BLEU (Papineni et al., 2002) metrics. Additionally, only for the synthetic data, we produced token-level scores following the usual procedure to determine post-editing effort. 3 For this step, word alignments were required to obtain source-side token-level quality indicators. We used the same parallel corpora to produce synthetic data and to train word alignments based on the IBM Model 2 (Brown et al., 1993) using fast_align (Dyer et al., 2013). Details about the synthetic data are presented in Table 2. 4 Regarding the special case of DE-ZH, in preliminary experiments we noticed for this language pair that the translation quality of the synthetic data was low compared to the three other language pairs. We assumed that this was due to two issues: the quality of the DE-ZH parallel corpora and the performance of the NMT model. To tackle the first issue, we generated our own DE-ZH parallel corpora by pivot-based (back-)translation, starting from a monolingual Chinese corpus composed of CommonCrawl and NewsCrawl 2018 to 2020, translating it into English using an in-house NMT model trained with Marian (Junczys-Dowmunt et al., 2018) on the WMT'21 QE ZH-EN parallel corpus, then translating the English output into German using the EN-DE NMT model released by the WMT'20 QE shared task organizers, resulting in a synthetic DE-ZH parallel corpus. To tackle the second issue, we finetuned mBART50 on the NewsCommentary and MultiUN DE-ZH corpora retrieved from OPUS, as these two appeared to be the cleanest among the available ones.",
"cite_spans": [
{
"start": 539,
"end": 554,
"text": "(Popovi\u0107, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 561,
"end": 582,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF31"
},
{
"start": 592,
"end": 615,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
},
{
"start": 764,
"end": 765,
"text": "3",
"ref_id": null
},
{
"start": 979,
"end": 999,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 1029,
"end": 1048,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 1111,
"end": 1112,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1101,
"end": 1108,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
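The sentence-level supervision above combines chrF, TER, and BLEU scores between MT output and the reference. As a self-contained illustration of one such metric, the following is a simplified chrF sketch (character n-grams up to order 6, F-beta with beta = 2, as in the original metric); the experiments presumably used a standard implementation such as sacreBLEU, so treat this as illustrative only, not the exact scorer.

```python
from collections import Counter


def char_ngrams(text, n):
    # Count character n-grams, ignoring whitespace (a simplification).
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))


def simple_chrf(hypothesis, reference, max_order=6, beta=2.0):
    # Average n-gram precision and recall over orders 1..max_order,
    # then combine into an F-beta score (beta=2 favours recall, as in chrF).
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r) * 100
```

A perfect match scores 100, disjoint strings score 0, and partial character overlap falls in between, which is what makes chrF a convenient dense signal for synthetic sentence-level labels.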
{
"text": "We detail in this section the training procedures employed for QE pretraining on the synthetic data and finetuning on the officially released training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "4.3"
},
{
"text": "QE pretraining was conducted per language pair starting from the XLM-R checkpoint presented in Section 4.1 using two different random seeds and learning rates. Additionally, four QE pretraining were conducted on the concatenation of all synthetic data using four different random seeds and two learning rates. Training was ran for two epochs for the language specific models and for a single epoch for the remaining ones. We restricted the length of training samples to a minimum of 5 and a maximum of 128 subword tokens for the bilingual models and a maximum of 96 subword tokens for the multilingual ones. Training was conducted with batches of 128 source and target sequences with the AdamW optimizer (Loshchilov and Hutter, 2019) (parameters \u03b2 1 = 0.9, \u03b2 2 = 0.999 and = 1 \u00d7 10 \u22126 ). A linear learning rate warmup was employed during the first 50k updates to reach a maximum value of 5\u00d710 \u22126 or 2\u00d710 \u22126 depending on the model and random seed, which remained without decay until the end of the first epoch. The dropout rates were set to 0.1 for both the embeddings and the transformer blocks (feed-forward and attention layers), the model dimensionality and embedding size was 1, 024, feed-forward layers had a dimensionality of 4, 096 and the numbers of attention heads were set to 16 for the language model and 8 for the metric attention block.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "4.3"
},
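The learning rate schedule described above (linear warmup over the first 50k updates, then a constant rate with no decay) can be sketched as a pure function. The peak values mirror the paper's stated hyperparameters; the function itself is an illustrative reconstruction, not the authors' code.

```python
def lr_at_step(step, warmup_steps=50_000, peak_lr=5e-6):
    # Linear warmup from 0 to peak_lr over warmup_steps updates,
    # then constant (no decay) for the rest of training.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr
```

Such a schedule is typically plugged into a framework's lambda-based scheduler; keeping the rate flat after warmup matches the description of training until the end of the first epoch without decay.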
{
"text": "Finetuning was conducted using the officially released data presented in Table 1 during 20 epochs, monitoring the performance of each model using the validation set. No length restriction was applied on these corpora. In addition to the three automatic metrics used during QE pretraining, namely chrF, TER and BLEU, the direct assessment scores provided by the shared task organizers were used by simply adding an entry in the metric embeddings matrix. A few hyperparameters, namely the batch size, learning rate, as well as embedding and Transformer dropout rates, were optimized in a grid-search manner. The best resulting models according to token-level source and target performances based on the official metrics (Area Under the Curve, AUC, and Average Precision, AP) were kept for ensembling and predicting scores on the validation and test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "4.3"
},
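The token-level model selection above relies on AUC, which has a simple rank-based reading: the probability that a randomly chosen error token receives a higher score than a randomly chosen correct token (the Mann-Whitney U statistic). A minimal sketch, assuming binary labels where 1 marks an error token:

```python
def roc_auc(scores, labels):
    # AUC as the probability that a randomly chosen positive (error) token
    # is scored above a randomly chosen negative (correct) token;
    # ties count as half a win (Mann-Whitney U formulation).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This quadratic pairwise form is fine for illustration; production evaluation would use a library routine such as scikit-learn's `roc_auc_score`, which sorts once instead of comparing all pairs.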
{
"text": "We present in Table 3 the results obtained on the Eval4NLP 2021 shared task as reported by the organizers, including our baselines and final submissions along with the three baselines proposed by the organizers, namely random scores, Tran-sQuest (Ranasinghe et al., 2020b) combined with LIME (Ribeiro et al., 2016) (noted Official baseline 1), and XMoverScore (Zhao et al., 2020) combined with SHAP (Lundberg and Lee, 2017) (noted Official baseline 2). Our baselines are composed of ensembles of two finetuned language-specific QE pretrained models while our final submissions are composed of ensembles of eight finetuned models for each of token-level tasks (source and target) and eight models for sentence-level tasks. The eight token-level models are, for each random seed, the best language-specific finetuned models according to source or target AUC and AP, and the best multilingual models according to source or target AUC. The eight sentence-level models are the best direct assessment Pearson's \u03c1 for both bilingual and multilingual models, as well as the best direct assessment RMSE for the bilingual model.",
"cite_spans": [
{
"start": 246,
"end": 272,
"text": "(Ranasinghe et al., 2020b)",
"ref_id": "BIBREF25"
},
{
"start": 360,
"end": 379,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
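The submissions above ensemble eight finetuned models per task. The paper does not spell out the combination rule; a uniform average of per-token scores across members, sketched below, is one minimal and common assumption for score-level ensembling.

```python
def ensemble_scores(member_scores):
    # member_scores: one list of per-token scores per ensemble member.
    # Combine by uniform averaging across members (an assumed rule;
    # the paper does not specify its exact combination scheme).
    n_members = len(member_scores)
    return [sum(token_scores) / n_members
            for token_scores in zip(*member_scores)]
```

For example, two members scoring a three-token sentence as `[0.2, 0.4, 0.6]` and `[0.4, 0.6, 0.8]` yield averaged token scores of roughly `[0.3, 0.5, 0.7]`.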
{
"text": "Results show that our baselines and final submissions largely outperform the organizer's baselines on the four language pairs, while our final submissions reach higher source and target tokenlevel performances compared to our baselines. Except for the DE-ZH pair, our final submissions are outperforming our baselines on the sentence-level Pearson's \u03c1 evaluation. Because the main objec- tives of the shared task was token-level evaluation, we did not focus on improving the sentence-level scores. We assume that further improvements are achievable on this aspect of QE. Additionally, due to the lack of validation sets for the zero-shot language pairs, we could not try to improve over a baseline and thus only provided a unique and final submission. Note that we did not use the official word-level training data at all, neither during pretraining nor during finetuning of our models. Only the provided validation set was used for monitoring purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "In order to evaluate the impact of QE pretraining and finetuning, as well as the difference in performance between ensemble and single models trained using language pair specific (bilingual) and multilingual datasets, we present an ablation study conducted on the word-level QE in Table 4 for the two non zero-shot language pairs. The results obtained without ensemble models are the average of results from individual models. The ablation study shows that individual models (-Ensemble) are outperformed by the ensemble (Submission), while removing language pair specific training data (-Bilingual) has limited impact on performances for ET-EN, which motivates the use of multilingual pretrained models and ensembling. Comparing removing finetuning and QE pretraining, the latter leads to the largest performance drop while the for- Table 4 : Results of the ablation study on non zero-shot pairs obtained on the validation set for token-level QE. mer has a relatively limited impact. This indicates that large amount of synthetic data combined with our approach performs well even without using any of the provided manually annotated data for the shared task.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 288,
"text": "Table 4",
"ref_id": null
},
{
"start": 833,
"end": 840,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "As an explanation of sentence-level scores predicted by our model, we propose to extract the attention weights computed between the metric embeddings and the contextually encoded input sequences. A few samples extracted from the validation set are presented in Figure 3 . We can see on these examples that individual metrics do not correlate with human annotations. However, the multimetric approach, which relies on heads and metrics combination through linear layers, provides a potential error identification method. ",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 269,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
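The metric-to-input attention weights used as explanations are, at their core, softmax-normalized scaled dot products between a metric embedding (the query) and the encoded input tokens (the keys). A single-head sketch follows, ignoring the learned projections and the multi-head combination of the actual model:

```python
import math


def attention_weights(query, keys):
    # Scaled dot-product attention weights between one metric embedding
    # (query) and the contextually encoded tokens (keys).
    # Illustrative single-head sketch, not the paper's exact module.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the weights form a distribution over input tokens, reading off the highest-weighted tokens per metric gives exactly the kind of rationale shown in Figure 3.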
{
"text": "Since the shift of most NLP tasks towards using large pretrained contextual LMs as basis for taskspecific finetuning, the research community working on QE for MT moved from a classic two-step process of feature engineering followed by machine learning (Blatz et al., 2004; Quirk, 2004; Specia et al., 2009) to an end-to-end training neural-based paradigm. First attempts in this direction were conducted by Kim et al. (2017) with the predictorestimator, which inspired further work in using various types of encoders (Wang et al., 2020) , enriching the model with features extracted from NMT models (Moura et al., 2020; Fomicheva et al., 2020a) or modifying the pretraining objective of contextual LMs for QE adaptation (Rubino and Sumita, 2020) .",
"cite_spans": [
{
"start": 252,
"end": 272,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF0"
},
{
"start": 273,
"end": 285,
"text": "Quirk, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 286,
"end": 306,
"text": "Specia et al., 2009)",
"ref_id": "BIBREF33"
},
{
"start": 407,
"end": 424,
"text": "Kim et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 517,
"end": 536,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 599,
"end": 619,
"text": "(Moura et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 620,
"end": 644,
"text": "Fomicheva et al., 2020a)",
"ref_id": "BIBREF7"
},
{
"start": 720,
"end": 745,
"text": "(Rubino and Sumita, 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "6"
},
{
"text": "More recently, due to the costly nature of data acquisition for supervised learning of QE models, unsupervised approaches were proposed by relying mostly on signals given by NMT systems when translating source sentences (Fomicheva et al., 2020b) , the so-called glass-box features. Alternatively, when the NMT systems which produced data to perform QE on are not available (i.e. blackbox setting), relying on large amount of synthetic data for contextual LM continued training prior to finetuning appears to be an effective way to approximate human judgments of translation quality (Lee, 2020; Tuan et al., 2021) .",
"cite_spans": [
{
"start": 220,
"end": 245,
"text": "(Fomicheva et al., 2020b)",
"ref_id": "BIBREF8"
},
{
"start": 582,
"end": 593,
"text": "(Lee, 2020;",
"ref_id": "BIBREF14"
},
{
"start": 594,
"end": 612,
"text": "Tuan et al., 2021)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "6"
},
{
"text": "This paper presented a novel QE architecture for unsupervised token-level quality prediction providing sentence-level explainable decisions from the model. We implemented a metric embeddings and attention mechanism on top of a widely used pretrained contextual LM, allowing to add metrics during finetuning and enabling high performance QE both at the levels of token and sentence. This extensible framework was shown to produce results on par or outperforming state-of-the-art QE approaches without relying on human-produced token-level annotations, which could be approximated with the use of relatively cost-effective synthetic data and automatic metrics. Our pivot-based translation approach also tackled a recurrent issue in MT when parallel data are scarce and final results for zero-shot language pairs validated this method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Scripts and procedure available at https://github. com/deep-spin/qe-corpus-builder 4 Parallel corpora were collected from the WMT news translation task(Tiedemann, 2016) and OPUS(Tiedemann, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the reviewers for their insightful comments and suggestions. A part of this work was conducted under the commissioned research program \"Research and Development of Advanced Multilingual Translation Technology\" in the \"R&D Project for Information and Communications Technology (JPMI00316)\" of the Ministry of Internal Affairs and Communications (MIC), Japan, and supported by JSPS KAKENHI grant numbers 20K19879 and 19H05660.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Confidence Estimation for Machine Translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence Esti- mation for Machine Translation. In Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321. International Commit- tee on Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational linguistics",
"authors": [
{
"first": "Peter F",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen A Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert L",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Stephen A Della Pietra, Vincent J Della Pietra, and Robert L Mercer. 1993. The Mathematics of Statistical Machine Translation: Pa- rameter Estimation. Computational linguistics, 19(2):263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised Cross-lingual Representation Learning at Scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Crosslingual Language Model Pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual Language Model Pretraining. In Advances in Neural Information Processing Systems, pages 7057-7067. Curran Associates, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameter- ization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 644-648. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The eval4nlp shared task on explainable quality estimation: Overview and results",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The eval4nlp shared task on explainable quality estima- tion: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERGAMOT-LATTE Submissions for the WMT20 Quality Estimation Shared Task",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1010--1017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Vishrav Chaudhary, Mark Fishel, Francisco Guzm\u00e1n, and Lucia Specia. 2020a. BERGAMOT-LATTE Submissions for the WMT20 Quality Estimation Shared Task. In Proceedings of the Fifth Conference on Machine Translation, pages 1010-1017. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised Quality Estimation for Neural Machine Translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "539--555",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00330"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised Quality Estimation for Neural Machine Translation. Transactions of the As- sociation for Computational Linguistics, 8:539-555.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Findings of the WMT 2019 Shared Tasks on Quality Estimation",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the WMT 2019 Shared Tasks on Quality Esti- mation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-12, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Continuous Measurement Scales in Human Evaluation of Machine Translation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "33--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous Measurement Scales in Human Evaluation of Machine Transla- tion. In Proceedings of the 7th Linguistic Annota- tion Workshop and Interoperability with Discourse, pages 33-41. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Marian: Fast Neural Machine Translation in C++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Alham",
"middle": [],
"last": "Fikri Aji",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Al- ham Fikri Aji, Nikolay Bogoychev, et al. 2018. Mar- ian: Fast Neural Machine Translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "G\u00f3is",
"suffix": ""
},
{
"first": "M",
"middle": [
"Amin"
],
"last": "Farajian",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [
"V"
],
"last": "Lopes",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "80--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M Amin Farajian, Ant\u00f3nio V Lopes, and Andr\u00e9 FT Martins. 2019. Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 80-86. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4763"
]
},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation. In Proceedings of the Second Confer- ence on Machine Translation, pages 562-568. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Two-Phase Cross-Lingual Language Model Fine-Tuning for Machine Translation Quality Estimation",
"authors": [
{
"first": "Dongjun",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1024--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongjun Lee. 2020. Two-Phase Cross-Lingual Lan- guage Model Fine-Tuning for Machine Translation Quality Estimation. In Proceedings of the Fifth Con- ference on Machine Translation, pages 1024-1028. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual Denoising Pre-training for Neural Machine Translation. Transactions of the Association for",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00343"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual Denoising Pre-training for Neural Machine Translation. Trans- actions of the Association for Computational Lin- guistics, 8:726-742.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Decoupled Weight Decay Regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Seventh International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In Proceedings of the Seventh International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Unified Approach to Interpreting Model Predictions",
"authors": [
{
"first": "Scott M",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4765-4774. Curran Associates, Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "IST-Unbabel Participation in the WMT20 Quality Estimation Shared Task",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Daan Van Stigt",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Kepler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1029--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Moura, miguel vera, Daan van Stigt, Fabio Kepler, and Andr\u00e9 F. T. Martins. 2020. IST-Unbabel Par- ticipation in the WMT20 Quality Estimation Shared Task. In Proceedings of the Fifth Conference on Machine Translation, pages 1029-1036. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, pages 8026-8037. Curran Associates, Inc.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "chrF Deconstructed: \u03b2 Parameters and n-gram Weights",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "499--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2016. chrF Deconstructed: \u03b2 Parameters and n-gram Weights. In Proceedings of the First Conference on Machine Translation, pages 499-504. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Training a Sentence-Level Machine Translation Confidence Measure",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "825--828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Quirk. 2004. Training a Sentence-Level Machine Translation Confidence Measure. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, pages 825-828. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "TransQuest at WMT2020: Sentence-Level Direct Assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1049--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. TransQuest at WMT2020: Sentence-Level Direct Assessment. In Proceedings of the Fifth Conference on Machine Translation, pages 1049-1055. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "TransQuest: Translation Quality Estimation with Cross-lingual Transformers",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5070--5081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. TransQuest: Translation Quality Estimation with Cross-lingual Transformers. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5070-5081. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Zero-shot sequence labeling: Transferring knowledge from sentences to tokens",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "293--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Anders S\u00f8gaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 293-302. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "\"Why Should I Trust You?\": Explaining the Predictions of Any Classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {
"DOI": [
"10.1145/2939672.2939778"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why Should I Trust You?\" Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144. Association for Computing Machinery.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "NICT Kyoto Submission for the WMT'20 Quality Estimation Task: Intermediate Training for Domain and Task Adaptation",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1042--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Rubino. 2020. NICT Kyoto Submission for the WMT'20 Quality Estimation Task: Intermediate Training for Domain and Task Adaptation. In Proceedings of the Fifth Conference on Machine Translation, pages 1042-1048. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Intermediate Self-supervised Learning for Machine Translation Quality Estimation",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4355--4360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Rubino and Eiichiro Sumita. 2020. Intermediate Self-supervised Learning for Machine Translation Quality Estimation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4355-4360. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A Study of Translation Edit Rate with Targeted Human Annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223-231. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Findings of the WMT 2020 Shared Task on Quality Estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "743--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 Shared Task on Quality Estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Estimating the Sentence-Level Quality of Machine Translation Systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 13th Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "28--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the Sentence-Level Quality of Machine Translation Systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation, pages 28-35. European Association for Machine Translation.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Are we Estimating or Guesstimating Translation Quality?",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6262--6267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Sun, Francisco Guzm\u00e1n, and Lucia Specia. 2020. Are we Estimating or Guesstimating Translation Quality? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6262-6267. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Multilingual Translation with Extensible Multilingual Pretraining and Finetuning",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Chau",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.00401"
]
},
"num": null,
"urls": [],
"raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual Translation with Extensible Multilingual Pretraining and Finetuning. arXiv preprint arXiv:2008.00401.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "OPUS - Parallel Corpora for Everyone",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2016. OPUS - Parallel Corpora for Everyone. In Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products. Baltic Journal of Modern Computing.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Quality Estimation without Human-labeled Data",
"authors": [
{
"first": "Yi-Lin",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Kishky",
"suffix": ""
},
{
"first": "Adithya",
"middle": [],
"last": "Renduchintala",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "619--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Lucia Specia. 2021. Quality Estimation without Human-labeled Data. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 619-625. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Attention is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Advances in Neural Information Processing Systems, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "HW-TSC's Participation at WMT 2020 Quality Estimation Shared Task",
"authors": [
{
"first": "Minghan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hengchao",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Daimeng",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jiaxin",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Lizhi",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Shimin",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Shiliang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yimeng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1056--1061",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minghan Wang, Hao Yang, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Lizhi Lei, Ying Qin, Shimin Tao, Shiliang Sun, Yimeng Chen, et al. 2020. HW-TSC's Participation at WMT 2020 Quality Estimation Shared Task. In Proceedings of the Fifth Conference on Machine Translation, pages 1056-1061. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transformers: State-of-the-Art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1656--1671",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.151"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Goran Glava\u0161, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1656-1671. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "W_s and b_s are trainable parameters of the linear output layer, \u03c6 is a pooling function, and \u03c3 is the sigmoid function. The output y_s of this QE component is a score indicating the sentence-level translation quality.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Attention weights computed between individual metric embeddings, namely DA, TER, chrF and BLEU, along with the multimetric approach (see eqn. 6) and the human annotations (noted gold). Samples extracted from the ET-EN and RO-EN validation sets (top and bottom respectively).",
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"text": "Official training, validation and test data released for the Eval4NLP 2021 shared task. DE-ZH and RU-DE are zero-shot language pairs thus have neither training nor validation corpora. Tokens and types columns contain source / MT counts, k stands for thousands, Chinese tokens and types are characters.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Lang.</td><td>Sent.</td><td>Tokens</td><td>Types</td></tr><tr><td>ET-EN</td><td>24.9M</td><td>322.5M / 411.0M</td><td>4.8M / 2.8M</td></tr><tr><td>RO-EN</td><td>42.1M</td><td>600.5M / 601.2M</td><td>4.0M / 3.6M</td></tr><tr><td>DE-ZH</td><td>19.8M</td><td>422.8M / 708.1M</td><td>4.5M / 3.3k</td></tr><tr><td>RU-DE</td><td>19.5M</td><td>256.9M / 262.7M</td><td>4.4M / 4.4M</td></tr></table>",
"text": "Synthetic data produced for QE pretraining. Tokens and types columns contain source / MT counts, M stands for millions and k for thousands, Chinese tokens and types are characters.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"text": "Official test results of the Eval4NLP 2021 shared task, according to three metrics for source and target token-level QE (AUC, AP and Recall at top-K), and one metric for sentence-level QE (Pearson's \u03c1).",
"type_str": "table",
"num": null,
"html": null
}
}
}
}