{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:22.100511Z"
},
"title": "Bayesian Model-Agnostic Meta-Learning with Matrix-Valued Kernels for Quality Estimation",
"authors": [
{
"first": "Abiola",
"middle": [],
"last": "Obamuyide",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most current quality estimation (QE) models for machine translation are trained and evaluated in a fully supervised setting requiring significant quantities of labelled training data. However, obtaining labelled data can be both expensive and time-consuming. In addition, the test data that a deployed QE model would be exposed to may differ from its training data in significant ways. In particular, training samples are often labelled by one or a small set of annotators, whose perceptions of translation quality and needs may differ substantially from those of end-users, who will employ predictions in practice. Thus, it is desirable to be able to adapt QE models efficiently to new user data with limited supervision data. To address these challenges, we propose a Bayesian meta-learning approach for adapting QE models to the needs and preferences of each user with limited supervision. To enhance performance, we further propose an extension to a state-of-the-art Bayesian meta-learning approach which utilizes a matrix-valued kernel for Bayesian meta-learning of quality estimation. Experiments on data with varying number of users and language characteristics demonstrates that the proposed Bayesian metalearning approach delivers improved predictive performance in both limited and full supervision settings.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Most current quality estimation (QE) models for machine translation are trained and evaluated in a fully supervised setting requiring significant quantities of labelled training data. However, obtaining labelled data can be both expensive and time-consuming. In addition, the test data that a deployed QE model would be exposed to may differ from its training data in significant ways. In particular, training samples are often labelled by one or a small set of annotators, whose perceptions of translation quality and needs may differ substantially from those of end-users, who will employ predictions in practice. Thus, it is desirable to be able to adapt QE models efficiently to new user data with limited supervision data. To address these challenges, we propose a Bayesian meta-learning approach for adapting QE models to the needs and preferences of each user with limited supervision. To enhance performance, we further propose an extension to a state-of-the-art Bayesian meta-learning approach which utilizes a matrix-valued kernel for Bayesian meta-learning of quality estimation. Experiments on data with varying number of users and language characteristics demonstrates that the proposed Bayesian metalearning approach delivers improved predictive performance in both limited and full supervision settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality Estimation (QE) models aim to evaluate the output of Machine Translation (MT) systems at run-time, when no reference translations are available (Blatz et al., 2004; Specia et al., 2009) . QE models can be applied for instance to improve translation productivity by selecting high-quality translations amongst several candidates. A number of approaches have been proposed for this task (Specia et al., 2009 (Specia et al., , 2015 Kepler et al., 2019; Ranasinghe et al., 2020) , and a shared task yearly benchmarks proposed approaches (Fonseca et al., 2019; Specia et al., 2020) .",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF0"
},
{
"start": 173,
"end": 193,
"text": "Specia et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 393,
"end": 413,
"text": "(Specia et al., 2009",
"ref_id": "BIBREF25"
},
{
"start": 414,
"end": 436,
"text": "(Specia et al., , 2015",
"ref_id": "BIBREF24"
},
{
"start": 437,
"end": 457,
"text": "Kepler et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 458,
"end": 482,
"text": "Ranasinghe et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 541,
"end": 563,
"text": "(Fonseca et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 564,
"end": 584,
"text": "Specia et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different users of MT output have varying quality needs and standards, depending for instance on the downstream task at hand, or the level of their knowledge of the languages involved. Thus, the perception of the quality of MT output can be subjective, and therefore the quality estimates obtained from a model trained on data from one set of users may not serve the needs of a different set of users. In order to be able to make the most of these models, it is thus desirable to be able to efficiently adapt them to the needs and preferences of the end-user and with as little supervision as possible. However, most existing QE models are trained and evaluated in a fully supervised setting which assumes access to substantial quantities of labelled supervision data, which may not be available and can be expensive and time-consuming to obtain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to endow QE models with the ability to learn to adapt efficiently with limited supervision data, this work proposes a Bayesian meta-learning framework for the training and evaluation of QE models that are able to adapt to the needs of endusers with limited supervision data. We further improve the performance of Bayesian meta-learning for the task of quality estimation by extending the state-of-the-art Bayesian Model-Agnostic Meta-Learning (BMAML) approach of Kim et al. (2018) to utilize Stein Variational Gradient Descent (Liu and Wang, 2016) with matrix-valued kernels (Wang et al., 2019) , and demonstrate that this leads to enhanced predictive performance in both limited and full supervision settings.",
"cite_spans": [
{
"start": 472,
"end": 489,
"text": "Kim et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 536,
"end": 556,
"text": "(Liu and Wang, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 584,
"end": 603,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of meta-learning, also known as learning to learn (Schmidhuber, 1987; Thrun and Pratt, 1998) , is to develop models that can learn more efficiently over time, by generalizing from knowledge of how to solve related tasks from a given distribution of tasks. Given a learner model f w , for instance a neural network parametrized by w \u2208 R d , and a distribution p(T ) over tasks T , gradientbased model-agnostic meta-learning approaches such as MAML (Finn et al., 2017) seek to learn the parameters of the learner model which can be quickly adapted to new tasks sampled from the same distribution of tasks with limited supervision data.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Schmidhuber, 1987;",
"ref_id": "BIBREF20"
},
{
"start": 79,
"end": 101,
"text": "Thrun and Pratt, 1998)",
"ref_id": "BIBREF26"
},
{
"start": 456,
"end": 475,
"text": "(Finn et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},
{
"text": "In formal terms, these approaches seek parameters w that satisfy the meta-objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min w E T \u223cp(T ) [L T (U k (w; D T ))] ,",
"eq_num": "(1)"
}
],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},
{
"text": "where L T is the loss and D T is training data from task T , and U k denotes k steps of a gradient descent learning rule such as SGD. Intuitively, the meta-objective explicitly encourages the model to learn model parameters that can be quickly adapted to achieve optimum predictive performance across all tasks using limited supervision data and with as few gradient descent steps as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},
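{
"text": "To make Equation (1) concrete, the following is a minimal PyTorch-style sketch of one meta-training step (our illustration, not code from any cited work: task.train_loss and task.val_loss are hypothetical helpers that evaluate L_T on the task's training and held-out data, and a first-order approximation of the meta-gradient is used for brevity):\n\nimport torch\n\ndef maml_outer_step(model, tasks, inner_lr=1e-3, outer_lr=1e-4, k=5):\n    meta_grads = [torch.zeros_like(p) for p in model.parameters()]\n    for task in tasks:\n        fast = [p.clone() for p in model.parameters()]  # start from w\n        for _ in range(k):  # U_k: k steps of SGD on D_T\n            grads = torch.autograd.grad(task.train_loss(fast), fast)\n            fast = [w - inner_lr * g for w, g in zip(fast, grads)]\n        # outer loss L_T evaluated at the adapted parameters\n        outer = torch.autograd.grad(task.val_loss(fast), fast)\n        for mg, g in zip(meta_grads, outer):\n            mg += g  # first-order approximation of the meta-gradient\n    with torch.no_grad():\n        for p, mg in zip(model.parameters(), meta_grads):\n            p -= outer_lr * mg / len(tasks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},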
{
"text": "In order to account for uncertainty and improve robustness, Bayesian approaches to meta-learning have also been proposed (Kim et al., 2018; Finn et al., 2018; Ravi and Beatson, 2019; Wang et al., 2020; Nguyen et al., 2020) . In contrast to their non-Bayesian counterparts which learn point estimates of the parameters, Bayesian meta-learning approaches learn a distribution over the parameters to further improve robustness in limited supervision settings.",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Kim et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 140,
"end": 158,
"text": "Finn et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 159,
"end": 182,
"text": "Ravi and Beatson, 2019;",
"ref_id": "BIBREF18"
},
{
"start": 183,
"end": 201,
"text": "Wang et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 202,
"end": 222,
"text": "Nguyen et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Agnostic Meta-Learning",
"sec_num": "2.1"
},
{
"text": "Stein Variational Gradient Descent (SVGD) (Liu and Wang, 2016 ) is a Bayesian inference method which works by initializing a set of samples, also known as particles, from a simple distribution and iteratively updating the particles to match samples from a target distribution. Because its particle update rule is deterministic and differentiable, it can be used to perform Bayesian inference in the metalearning inner loop, since the entire update process can still be differentiated through for gradientbased updates from the outer loop, for instance as was done in Kim et al. (2018) .",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Liu and Wang, 2016",
"ref_id": "BIBREF11"
},
{
"start": 567,
"end": 584,
"text": "Kim et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "In order to obtain N samples from a posterior p(w), SVGD maintains N samples of model parameters, and iteratively transports the samples to match samples from the target distribution. Let the samples be represented by W = {w n } N n=1 . At each successive iteration t, SVGD updates each sample with the following update rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wt+1 \u2190 wt + \u03b1t\u03c6 (wt) ,",
"eq_num": "(2)"
}
],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 (wt) = 1 N N n=1 k (w n t , wt) \u2207 w n t log p (w n t ) + \u2207 w n t k (w n t , wt) ,",
"eq_num": "(3)"
}
],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "α_t is a step-size parameter and k : R^d × R^d → R is a scalar-valued positive-definite kernel such as the Radial Basis Function (RBF) kernel. Intuitively, the first term in Equation 3 implies that a particle determines its update direction through a weighted aggregate of the gradients from the other particles, with the kernel distance between the particles serving as the weight. Thus, closer particles have more weight in the aggregate. The second term of the equation can be understood as a repulsive force that prevents the particles from collapsing to a single point. When the number of particles is one, the SVGD update procedure reduces to standard gradient ascent on the objective p(w) for any kernel with the property ∇_w k(w, w) = 0, such as the RBF kernel. SVGD has been applied in a wide range of settings, including reinforcement learning (Liu et al., 2017; Haarnoja et al., 2017), uncertainty quantification (Zhu and Zabaras, 2018), and online continual learning (Obamuyide et al., 2021). Let H_k denote a reproducing kernel Hilbert space (RKHS) with kernel k. Wang et al. (2019) observed that the original SVGD as proposed in Liu and Wang (2016) searches for the optimal update direction φ in the RKHS H^d_k = H_k × ··· × H_k, a product of d copies of an RKHS of scalar-valued functions, which does not allow the encoding of any potential correlations between different co-ordinates of φ. Wang et al. (2019) proposed Matrix-SVGD, which addresses this limitation by replacing H^d_k with a more general RKHS of vector-valued functions (also known as a vector-valued RKHS), which uses matrix-valued positive-definite kernels to specify rich correlation structures between the different co-ordinates. Concretely, Equation 3 as used in SVGD is replaced with Equation 4:",
"cite_spans": [
{
"start": 817,
"end": 835,
"text": "(Liu et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 836,
"end": 858,
"text": "Haarnoja et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 888,
"end": 911,
"text": "(Zhu and Zabaras, 2018)",
"ref_id": "BIBREF29"
},
{
"start": 944,
"end": 968,
"text": "(Obamuyide et al., 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational Gradient Descent",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 (wt) = 1 N N n=1 K (wt, w n t ) \u2207 w n t log p (w n t ) + K (wt, w n t ) \u2207 w n t ,",
"eq_num": "(4)"
}
],
"section": "Stein Variational",
"sec_num": "2.3"
},
{
"text": "where K :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational",
"sec_num": "2.3"
},
{
"text": "R^d × R^d → R^{d×d} is now a matrix-valued kernel, and K(·, w)∇_w is formally defined as the product of the matrix K(·, w) with the vector ∇_w. The ℓ-th element of K(·, w)∇_w is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(K(\u2022, w)\u2207w) = d m=1 \u2207wm K ,m (\u2022, w),",
"eq_num": "(5)"
}
],
"section": "Stein Variational",
"sec_num": "2.3"
},
{
"text": "where K ,m (w, w ) represents the ( , m)-element of matrix K (w, w ) and w m the m-element of w. Importantly, the advantage of Matrix-SVGD over the original SVGD algorithm is that it allows us to pre-condition SVGD by constructing a proper matrix kernel which incorporates the pre-conditioning information, in order to accelerate exploration and convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational",
"sec_num": "2.3"
},
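{
"text": "To make Equations (2) and (3) concrete, here is a minimal PyTorch sketch of one SVGD iteration with an RBF kernel (our illustration under assumed conventions: particles are the rows of a matrix W with requires_grad=True, and log_prob is a user-supplied unnormalized log-posterior); the matrix-valued update of Equation (4) is obtained by swapping the kernel, as sketched in Section 3:\n\nimport torch\n\ndef svgd_step(W, log_prob, step=1e-3, h=1.0):\n    # W: (N, d) particles; Equation 2: w <- w + step * phi(w)\n    score = torch.stack([torch.autograd.grad(log_prob(w), w)[0] for w in W])\n    Wd = W.detach()\n    diff = Wd.unsqueeze(1) - Wd.unsqueeze(0)        # w_n - w_m, shape (N, N, d)\n    K = torch.exp(-(diff ** 2).sum(-1) / (2 * h))   # RBF kernel k(w_n, w_m)\n    repulse = -(K.unsqueeze(-1) * diff).sum(0) / h  # sum_n grad_{w_n} k(w_n, w_m)\n    phi = (K @ score + repulse) / W.shape[0]        # Equation 3\n    return (Wd + step * phi).requires_grad_()       # Equation 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stein Variational Gradient Descent with Matrix-Valued Kernels",
"sec_num": "2.3"
},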
{
"text": "Kim et al. 2018proposed a Bayesian Model-Agnostic Meta-Learning (BMAML) algorithm which learns a distribution over parameters which, when given data from a new task, can be adapted quickly to a task-specific distribution using SVGD updates as defined in Equation 3. Thus, BMAML as proposed in Kim et al. (2018) makes use of scalar-valued kernels for SVGD updates, which (as discussed earlier) does not allow the encoding of potential correlations between different parameter co-ordinates for effective optimization, a limitation which we next address.",
"cite_spans": [
{
"start": 293,
"end": 310,
"text": "Kim et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic Meta-Learning",
"sec_num": "2.4"
},
{
"text": "Meta-Learning with Matrix-SVGD",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "In this work we propose to improve the predictive performance of BMAML for quality estimation with the use of the Matrix-SVGD, which uses matrix-valued kernels for more effective parameter updates, in place of the original SVGD algorithm used in Kim et al. (2018) . As pre-conditioning information, we use P , the average of the Fisher information matrix of the particles:",
"cite_spans": [
{
"start": 246,
"end": 263,
"text": "Kim et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = 1 N N n=1 F (wn) ,",
"eq_num": "(6)"
}
],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "where F (wn) is the Fisher information matrix for particle w n . The matrix-valued kernel is then computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K P w, w = P \u22121 exp \u2212 1 2h w \u2212 w 2 P ,",
"eq_num": "(7)"
}
],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "where w \u2212 w 2 P := (w \u2212 w ) P (w \u2212 w ) and h is a bandwidth parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
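{
"text": "The following is a minimal sketch of one preconditioned update with the constant kernel of Equation (7) (our illustration: P is approximated by the empirical outer product of the score vectors, one possible estimator of Equation (6), and score[n] = grad log p(w_n) is assumed precomputed as in svgd_step above):\n\nimport torch\n\ndef matrix_svgd_step(W, score, step=1e-3, h=1.0, eps=1e-6):\n    # W: (N, d) particles; score: (N, d) gradients of log p at each particle\n    N, d = W.shape\n    P = score.t() @ score / N + eps * torch.eye(d)      # Equation 6 (empirical Fisher)\n    diff = W.unsqueeze(1) - W.unsqueeze(0)              # w_m - w_n, shape (N, N, d)\n    sq = torch.einsum('mni,ij,mnj->mn', diff, P, diff)  # ||w_m - w_n||_P^2\n    q = torch.exp(-sq / (2 * h))                        # scalar factor of Equation 7\n    drive = q @ torch.linalg.solve(P, score.t()).t()    # k(w_m, w_n) P^{-1} grad log p(w_n)\n    repulse = (q.unsqueeze(-1) * diff).sum(1) / h       # the P^{-1} P product cancels here\n    return W + step * (drive + repulse) / N             # Equations 2 and 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},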
{
"text": "The full algorithm, which we refer to as Matrix-BMAML, is outlined in Algorithm 1. We use machine translation quality estimation as a case study in this work, and so assume access to a distribution of quality estimation tasks p(T ) (each QE task can be a QE user/annotator/post-editor with their corresponding data), and a quality estimation model f W parameterized by W , though the approach can also be applied to other natural language processing or computer vision tasks. for each Ti do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "Sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "D train T i from T train i 6: Sample D val T i from T val i 7: W i 0 \u2190 W 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "for k = 1,..K do 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "W i k = Matrix-SVGD(W i k\u22121 ; D train T i , \u03b1) 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "end for 11: end for 12:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "W \u2190 W \u2212 \u03b2\u2207W T i \u223cp(T ) L f W i K ; D val T i 13: end while",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
{
"text": "We first initialize the parameters of the quality estimation model (line 1). Then in each iteration, we sample a batch of QE tasks (line 3), and for each QE task, we sample instances from its training and validation sets (lines 4-6). Thereafter, task-specific parameters are initialized from the model's parameters (line 7), and then updated with K steps of Matrix-SVGD (using Equations (2) and (4) to (7)) (lines 8-10). At the end of each iteration, a metaupdate is performed on the model's parameters W .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},
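{
"text": "A compact sketch of the meta-training loop of Algorithm 1 (illustrative only: sample_tasks, task.score and task.val_loss are hypothetical helpers, matrix_svgd_step is the sketch above, and a first-order meta-gradient is used instead of differentiating through the K inner updates):\n\nimport torch\n\ndef matrix_bmaml_train(W_meta, tasks, K=5, alpha=1e-3, beta=1e-4, steps=1000):\n    # W_meta: (N, d) meta-particles\n    for _ in range(steps):\n        meta_grad = torch.zeros_like(W_meta)\n        batch = sample_tasks(tasks)                       # line 3 of Algorithm 1\n        for task in batch:\n            W = W_meta.detach().clone()                   # line 7: W_0^i <- W\n            for _ in range(K):                            # lines 8-10: inner updates\n                W = matrix_svgd_step(W, task.score(W), step=alpha)\n            W = W.detach().requires_grad_()\n            loss = task.val_loss(W)                       # validation loss of line 12\n            meta_grad += torch.autograd.grad(loss, W)[0]  # first-order approximation\n        with torch.no_grad():\n            W_meta -= beta * meta_grad / len(batch)       # line 12: meta-update\n    return W_meta",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Model-Agnostic",
"sec_num": "3"
},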
{
"text": "We conduct experiments in two settings: in a limited supervision setting, where we provide all models access to only a limited number of training instances per QE task; and in a full-supervision setting, where we provide the models with access to all available training instances for each QE task. The QT21 Dataset We evaluate our approach with the publicly available QT21 (Specia et al., 2017) , a large-scale dataset containing translations from both statistical (smt) and neural (nmt) machine translation systems in multiple language directions 1 . This is the largest dataset with annotator information available. We make use of data from the English-Latvian (en-lv) and English-Czech (encs) language directions. The languages were chosen as they contain the largest number of annotators. Each instance in the dataset is a tuple of source sentence, its machine translation, the corresponding post-edited translation by a professional translator (post-editor), a reference translation and other information such as (anonymized) post-editor identifier. We construct a QE dataset from this corpus by computing the HTER (Snover et al., 2006) values between each source sentence and its post-edited translation. We thereafter split the data into train, dev and test splits for each post-editor, which constitutes a QE task. A breakdown of the number of train, dev and test instances per QE task/post-editor is available in Table 1 .",
"cite_spans": [
{
"start": 373,
"end": 394,
"text": "(Specia et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 1120,
"end": 1141,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1422,
"end": 1429,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
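{
"text": "For illustration, sentence-level HTER labels can be derived with any TER implementation by treating the post-edit as the reference; a sketch using sacrebleu's TER metric (an assumption on our part, since the implementation used is not stated):\n\nfrom sacrebleu.metrics import TER\n\nter = TER()\n\ndef hter(mt_output, post_edit):\n    # HTER: TER between the MT hypothesis and its human post-edit\n    return ter.sentence_score(mt_output, [post_edit]).score / 100.0  # sacrebleu reports percentages\n\nprint(hter('the cat sat on mat', 'the cat sat on the mat'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},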
{
"text": "The quality estimation model used by all methods is based on multi-lingual DistilBERT (Sanh et al., 2019) , a smaller version of multi-lingual BERT (Devlin et al., 2019) trained with knowledge distillation (Bucilu\u01ce et al., 2006; Hinton et al., 2015) . It accepts as input the source and machine translation outputs concatenated as a single text, separated by a '[SEP]' token and prepended with a '[CLS]' token. The representation of the '[CLS]' token is then passed to a linear layer to predict HTER (Snover et al., 2006) values as regression targets.",
"cite_spans": [
{
"start": 86,
"end": 105,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 148,
"end": 169,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 206,
"end": 228,
"text": "(Bucilu\u01ce et al., 2006;",
"ref_id": "BIBREF1"
},
{
"start": 229,
"end": 249,
"text": "Hinton et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 500,
"end": 521,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Model",
"sec_num": "5"
},
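{
"text": "A minimal sketch of the described architecture with the Hugging Face transformers library (a sketch under stated assumptions, not the authors' code; training details are omitted):\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\nname = 'distilbert-base-multilingual-cased'\ntok = AutoTokenizer.from_pretrained(name)\nencoder = AutoModel.from_pretrained(name)\nhead = torch.nn.Linear(encoder.config.dim, 1)  # linear layer predicting HTER\n\ndef predict_hter(source, translation):\n    # encodes '[CLS] source [SEP] translation [SEP]' as a single input\n    enc = tok(source, translation, return_tensors='pt', truncation=True)\n    hidden = encoder(**enc).last_hidden_state   # (1, seq_len, dim)\n    return head(hidden[:, 0])                   # regression on the '[CLS]' representation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QE Model",
"sec_num": "5"
},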
{
"text": "Benchmark Approaches We compare the proposed approach with the following: MTL-PRETRAIN is a baseline trained in classic multitask fashion for multiple epochs using data from all QE tasks. It is thereafter fine-tuned using each QE task's training data before making predictions on its test set, in a similar fashion as the meta-learning approaches; REPTILE (Nichol and Schulman, 2018) ; Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) ; implicit Model-Agnostic Meta-Learning (iMAML) (Rajeswaran et al., 2019) ; Amortized Bayesian Meta-Learning (ABML) (Ravi and Beatson, 2019) ; and BMAML (Kim et al., 2018) , a state-of-the-art Bayesian meta-learning method.",
"cite_spans": [
{
"start": 356,
"end": 383,
"text": "(Nichol and Schulman, 2018)",
"ref_id": "BIBREF14"
},
{
"start": 422,
"end": 441,
"text": "(Finn et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 490,
"end": 515,
"text": "(Rajeswaran et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 558,
"end": 582,
"text": "(Ravi and Beatson, 2019)",
"ref_id": "BIBREF18"
},
{
"start": 595,
"end": 613,
"text": "(Kim et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Model",
"sec_num": "5"
},
{
"text": "Evaluation We report Pearson's r correlation scores and Mean Absolute Error (MAE) between model output and gold labels, both standard evaluation metrics in QE. Each experiment is repeated across five (5) different random seeds, and we report the average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QE Model",
"sec_num": "5"
},
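{
"text": "Concretely, with the SciPy and scikit-learn routines named in the appendix (toy values for illustration):\n\nfrom scipy.stats import pearsonr\nfrom sklearn.metrics import mean_absolute_error\n\ngold = [0.10, 0.45, 0.30, 0.80]  # gold HTER labels\npred = [0.12, 0.40, 0.33, 0.70]  # model outputs\nr, _ = pearsonr(gold, pred)      # Pearson's r\nmae = mean_absolute_error(gold, pred)\nprint(f'Pearson r = {r:.3f}, MAE = {mae:.3f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QE Model",
"sec_num": "5"
},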
{
"text": "Results obtained in a setting where all approaches have access to only very limited training instances is presented in Figure 1 . As expected, training with classic multi-task learning and then finetuning on the training data of each QE task (MTL-PRETRAIN) results in very poor performance on both datasets. This result is consistent with the results observed in Finn et al. (2017) , since classic multi-task learning does not have any explicit objective that encourages the model to learn how to learn with limited supervision data. In contrast, all meta-learning approaches obtain consistent improvements over the MTL-PRETRAIN baseline. We find that in general, our approach (Matrix-BMAML) obtains marked performance improvements over the other Bayesian and non-Bayesian meta-learning approaches. This demonstrates the importance of incorporating pre-conditioning information through matrix-valued kernels for more ef- fective SVGD updates in Bayesian model-agnostic meta-learning. Table 2 presents results obtained when the approaches are given access to all available training data for each QE task. We can observe that Matrix-BMAML obtained the best MAE on the encs dataset, and the best Pearson's correlation on both datasets, which again demonstrates the effectiveness of our approach in this setting.",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "Finn et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 984,
"end": 991,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Limited Supervision Results",
"sec_num": "5.1"
},
{
"text": "We proposed a Bayesian meta-learning framework for adapting machine translation quality estimation models to the quality needs and preferences of each user with limited supervision data. We further extend a state-of-the-art Bayesian metalearning method with the use of matrix-valued kernels, which enables the incorporation of preconditioning information for more effective SVGD updates. Using data from two language directions, we demonstrate improved predictive performance in both limited and full-supervision settings over recent state-of-the-art Bayesian and non-Bayesian meta-learning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://www.qt21.eu/resources/data/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/distilbert-base-multilingualcased 3 https://www.scipy.org 4 https://scikit-learn.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by funding from the Bergamot project (EU H2020 grant no. 825303).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "All compared approaches have a run time of about two hours on average. Each model was implemented as a linear layer on top of multilingual DistilBERT (Sanh et al., 2019) , which has a total of 134M parameters. 2 For the evaluation metrics, Pearson r correlation and MAE, we use open-source implementations available in SciPy 3 and scikit-learn 4 libraries respectively.All models make use of the same values for hyper-parameters such as learning rate and batch size, selected by manual search in initial experiments. These are provided in Table 3. ",
"cite_spans": [
{
"start": 150,
"end": 169,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 210,
"end": 211,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 539,
"end": 547,
"text": "Table 3.",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence esti- mation for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321, Geneva, Switzerland.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Model compression",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Bucilu\u01ce",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Bucilu\u01ce, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Pro- ceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data min- ing, pages 535-541.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1126--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th Inter- national Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Re- search, pages 1126-1135. PMLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Probabilistic model-agnostic meta-learning",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances In Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Kelvin Xu, and S. Levine. 2018. Proba- bilistic model-agnostic meta-learning. In Advances In Neural Information Processing Systems.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of the WMT 2019 shared tasks on quality estimation",
"authors": [
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5401"
]
},
"num": null,
"urls": [],
"raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the WMT 2019 shared tasks on quality esti- mation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reinforcement learning with deep energy-based policies",
"authors": [
{
"first": "Tuomas",
"middle": [],
"last": "Haarnoja",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1352--1361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. 2017. Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learn- ing Research, pages 1352-1361. PMLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OpenKiwi: An open source framework for quality estimation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3020"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 F. T. Martins. 2019. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117-122, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality es- timation. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 562-568. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bayesian modelagnostic meta-learning",
"authors": [
{
"first": "Taesup",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaesik",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dia",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances In Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taesup Kim, Jaesik Yoon, O. Dia, S. Kim, Yoshua Bengio, and Sungjin Ahn. 2018. Bayesian model- agnostic meta-learning. In Advances In Neural In- formation Processing Systems.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stein variational gradient descent: A general purpose bayesian inference algorithm",
"authors": [
{
"first": "Qiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dilin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "2378--2386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiang Liu and Dilin Wang. 2016. Stein variational gra- dient descent: A general purpose bayesian inference algorithm. In Advances in Neural Information Pro- cessing Systems 29, pages 2378-2386. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stein variational policy gradient",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Prajit",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. 2017. Stein variational policy gradient. In Proceedings of the Thirty-Third Conference on Un- certainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017. AUAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Uncertainty in model-agnostic meta-learning using variational inference",
"authors": [
{
"first": "Cuong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh-Toan",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Carneiro",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision",
"volume": "",
"issue": "",
"pages": "3090--3100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cuong Nguyen, Thanh-Toan Do, and Gustavo Carneiro. 2020. Uncertainty in model-agnostic meta-learning using variational inference. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3090-3100.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reptile: a scalable metalearning algorithm",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Nichol",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Schulman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.02999"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Nichol and John Schulman. 2018. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(2):1.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Continual quality estimation with online bayesian meta-learning",
"authors": [
{
"first": "Abiola",
"middle": [],
"last": "Obamuyide",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abiola Obamuyide, Marina Fomicheva, and Lucia Spe- cia. 2021. Continual quality estimation with online bayesian meta-learning. In Proceedings of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Meta-learning with implicit gradients",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Rajeswaran",
"suffix": ""
},
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sham",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Kakade",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "113--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind Rajeswaran, Chelsea Finn, Sham M. Kakade, and Sergey Levine. 2019. Meta-learning with im- plicit gradients. In Advances in Neural Informa- tion Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 113-124.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transquest at wmt2020: Sentencelevel direct assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1049--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. Transquest at wmt2020: Sentence- level direct assessment. In Proceedings of the Fifth Conference on Machine Translation, pages 1049- 1055, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Amortized bayesian meta-learning",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Beatson",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations, ICLR 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Ravi and Alex Beatson. 2019. Amortized bayesian meta-learning. In 7th International Confer- ence on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis",
"authors": [
{
"first": "Jurgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jurgen Schmidhuber. 1987. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Associa- tion for Machine Translation in the Americas: Tech- nical Papers. Cambridge, MA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Findings of the WMT 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "743--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Translation quality and productivity: A study on rich morphology languages",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Aljoscha",
"middle": [],
"last": "Burchardt",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Inguna",
"middle": [],
"last": "Skadina",
"suffix": ""
}
],
"year": 2017,
"venue": "Machine Translation Summit XVI",
"volume": "",
"issue": "",
"pages": "55--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Kim Harris, Aljoscha Burchardt, Marco Turchi, Matteo Negri, and Inguna Skadina. 2017. Translation quality and productivity: A study on rich morphology languages. In Machine Translation Summit XVI, pages 55-71.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multi-level translation quality prediction with quest++",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with quest++. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26- 31, 2015, Beijing, China, System Demonstrations, pages 115-120. The Association for Computer Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Estimating the sentence-level quality of machine translation systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2009,
"venue": "13th Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "28--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation sys- tems. In 13th Conference of the European Associa- tion for Machine Translation, pages 28-37.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning to Learn: Introduction and Overview",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Thrun",
"suffix": ""
},
{
"first": "Lorien",
"middle": [],
"last": "Pratt",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning to Learn",
"volume": "",
"issue": "",
"pages": "3--17",
"other_ids": {
"DOI": [
"10.1007/978-1-4615-5529-2_1"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Thrun and Lorien Pratt. 1998. Learning to Learn: Introduction and Overview. In Learning to Learn, pages 3-17. Springer US, Boston, MA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stein variational gradient descent with matrixvalued kernels",
"authors": [
{
"first": "Dilin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziyang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bajaj",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "32",
"issue": "",
"pages": "7834--7844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilin Wang, Ziyang Tang, C. Bajaj, and Qiang Liu. 2019. Stein variational gradient descent with matrix- valued kernels. Advances in neural information pro- cessing systems, 32:7834-7844.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bayesian meta sampling for fast uncertainty adaptation",
"authors": [
{
"first": "Zhenyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ruiyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, and Changyou Chen. 2020. Bayesian meta sam- pling for fast uncertainty adaptation. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification",
"authors": [
{
"first": "Yinhao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Zabaras",
"suffix": ""
}
],
"year": 2018,
"venue": "J. Comput. Phys",
"volume": "366",
"issue": "",
"pages": "415--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhao Zhu and Nicholas Zabaras. 2018. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. J. Comput. Phys., 366:415-447.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Results obtained using limited training instances for each task on the (a) en-lv and (b) en-cs quality estimation datasets."
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Gradient Descent with</td></tr><tr><td>Matrix-Valued Kernels</td></tr></table>",
"text": "Let H k denote a reproducing kernel Hilbert space (RKHS) H with kernel k. Wang et al. (2019) observed that the original SVGD as proposed in Liu and Wang (2016) searches for the optimal update direction \u03c6 in RKHS H d k = H k \u00d7 \u2022 \u2022 \u2022 \u00d7 H k , a product of d copies of RKHS of scalar-valued functions, which does not allow the encoding of any potential correlations between different co-ordinates of \u03c6. Wang et al. (2019) proposed Matrix-SVGD, which addressed this limitation by replacing H d k",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Comparison with existing approaches.",
"html": null
}
}
}
}