{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:24.739057Z"
},
"title": "Explaining Errors in Machine Translation with Absolute Gradient Ensembles",
"authors": [
{
"first": "Melda",
"middle": [],
"last": "Eksi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Erik",
"middle": [],
"last": "Gelbing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "Stieber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Chi",
"middle": [
"Viet"
],
"last": "Vu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current research on quality estimation of machine translation focuses on the sentence-level quality of the translations. By using explainability methods, we can use these quality estimations for word-level error identification. In this work, we compare different explainability techniques and investigate gradient-based and perturbation-based methods by measuring their performance and required computational efforts 1. Throughout our experiments, we observed that using absolute word scores boosts the performance of gradient-based explainers significantly. Further, we combine explainability methods to ensembles to exploit the strengths of individual explainers to get better explanations. We propose the usage of absolute gradient-based methods. These work comparably well to popular perturbation-based ones while being more time-efficient.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Current research on quality estimation of machine translation focuses on the sentence-level quality of the translations. By using explainability methods, we can use these quality estimations for word-level error identification. In this work, we compare different explainability techniques and investigate gradient-based and perturbation-based methods by measuring their performance and required computational efforts 1. Throughout our experiments, we observed that using absolute word scores boosts the performance of gradient-based explainers significantly. Further, we combine explainability methods to ensembles to exploit the strengths of individual explainers to get better explanations. We propose the usage of absolute gradient-based methods. These work comparably well to popular perturbation-based ones while being more time-efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Building trustworthy and reliable Machine Translation (MT) systems has been a broad topic in Natural Language Processing (NLP) for the past decade. Advances in research have led to dominant architectures like the BERT transformer models (Devlin et al., 2019; Wolf et al., 2020) , which got widely adapted by the NLP community for a variety of tasks, including automated MT. Pretraining models (McCann et al., 2017; Devlin et al., 2019; on generic corpora has simplified the deployment of performant, yet easily adaptable models, and therefore became a central part of new translation systems which show great progress in terms of their translation quality while maintaining reasonable computational costs (Liu et al., 2020) .",
"cite_spans": [
{
"start": 237,
"end": 258,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 259,
"end": 277,
"text": "Wolf et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 393,
"end": 414,
"text": "(McCann et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 415,
"end": 435,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 705,
"end": 723,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, estimating the quality of a given translation requires human supervisors who are able to identify weaknesses in the translation, 1 Code for this paper is available at https: //github.com/SinisterThaumaturge/ MetaScience-Explainable-Metrics attribute wrongly translated parts and correct them manually (Comparin and Mendes, 2017) . Reference-free (i.e. without access to a reference translation) Quality Estimation (QE) tries to solve this costly and time-consuming process by providing models that are able to assign quality-scores automatically (Fomicheva et al., 2020b) .",
"cite_spans": [
{
"start": 144,
"end": 145,
"text": "1",
"ref_id": null
},
{
"start": 316,
"end": 343,
"text": "(Comparin and Mendes, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 561,
"end": 586,
"text": "(Fomicheva et al., 2020b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TransQuest (TQ), presented by Ranasinghe et al. (2020) , is a quality estimation framework which won the sentence-level direct assessment shared task in WMT 2020, thus we will utilize TQ primarily in this work as our target model to be explained. This year's 'Explainable Quality Estimation' shared task (Fomicheva et al., 2021) focuses on the evaluation of current QE by using different explainability methods. As the organizers propose, the identification of translation errors therefore should be seen as an explainability problem. Explanations are expected to provide insights into the connection between a given input-output-pair so that humans are able to easily understand the explanation and, if necessary, take action. Further, analyzing a set of predictions of a system with explainability methods helps with comparing the system's decision with human reasoning, ultimately assessing trust in the system (Ribeiro et al., 2016) .",
"cite_spans": [
{
"start": 30,
"end": 54,
"text": "Ranasinghe et al. (2020)",
"ref_id": "BIBREF21"
},
{
"start": 304,
"end": 328,
"text": "(Fomicheva et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 914,
"end": 936,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we will compare five explainability techniques on this year's task dataset (Section 3). We experiment with perturbation-based methods as well as gradient-based methods (Section 4) and propose a simple, yet effective ensembling technique to improve the overall classification performance. Our goal is to provide an overview of each approach's capabilities in terms of classification performance, but also in the context of the computational overhead required for each approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explainability is already a key research topic for Computer Vision (CV), but there are many rising efforts to make NLP models explainable as well (Pr\u00f6llochs et al., 2019; Rajani et al., 2019) . It can be helpful for different tasks, e.g. automated fact-checking for public health (Kotonya and Toni, 2020) . Frameworks for interpretability were already developed for CV, e.g. iNNvestigate or In-terpretML (Alber et al., 2019; Nori et al., 2019) but specific ones for NLP are also on the rise, like AllenNLP Interpret (Wallace et al., 2019) .",
"cite_spans": [
{
"start": 146,
"end": 170,
"text": "(Pr\u00f6llochs et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 171,
"end": 191,
"text": "Rajani et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 280,
"end": 304,
"text": "(Kotonya and Toni, 2020)",
"ref_id": "BIBREF10"
},
{
"start": 404,
"end": 424,
"text": "(Alber et al., 2019;",
"ref_id": null
},
{
"start": 425,
"end": 443,
"text": "Nori et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 516,
"end": 538,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Essential for ensuring trust in models are evaluation metrics that are more aligned with human perception of 'goodness'. For this reason, there are rising efforts in research for the creation of such metrics for various NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Existing popular evaluation metrics for text generation tasks like ROUGE (Lin, 2004) for text summarization and BLEU (Papineni et al., 2002) for MT base their measures on statistical similarity rather than evaluating semantic similarity. Several alternative metrics aim to tackle these shortcomings: unsupervised metrics MoverScore (Zhao et al., 2019) and its reference-free extension XMover-Score (XMS) (Zhao et al., 2020 ) that generates better aligned cross-lingual embeddings on basis of which translation quality can be measured more similarly to how humans do it.",
"cite_spans": [
{
"start": 73,
"end": 84,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 117,
"end": 140,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 332,
"end": 351,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 404,
"end": 422,
"text": "(Zhao et al., 2020",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "It should be possible to derive valuable information from sentence-level QE scores for word-level translation error identification. Until now there are no existing approaches for explaining errors in MT based on QE scores that we know of. However, in consideration of the 'Explainable Quality Estimation' shared task that is part of EMNLP 2021, we expect new approaches that address this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We conduct our experiments on the Multilingual Quality Estimation and Automatic Post-editing Dataset (MLQE-PE) (Fomicheva et al., 2020a) . The training and development data for this shared task consists of Romanian-English (Ro-En) and Estonian-English (Et-En) language pairs where word-level gold labels and sentence-level gold scores are provided in addition to source sentences and MT outputs. Data is shown exemplary in Figure 1. Our approach is to estimate word-level scores in an unsupervised fashion to show specific errors in translation. Therefore, we use explanations on sentence-level scores to get word-level scores, which are then evaluated with the given gold standard word-level scores in the development set.",
"cite_spans": [
{
"start": 111,
"end": 136,
"text": "(Fomicheva et al., 2020a)",
"ref_id": "BIBREF7"
},
{
"start": 423,
"end": 432,
"text": "Figure 1.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The training and development data consists of 7000 and 1000 tokenized sentences respectively for both Estonian-English and Romanian-English. Sentence-level scores range in [0, 100] where higher scores indicate better translations. We predict these with the QE model. Tokens for the sentence quality score (word-level labels) are rated binary, 1 being relevant and thus responsible for low-quality scores and 0 being correct tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Source: Turnul a fost distrus de cutremur , trebuind s\u0203 fie recunstruit \u00een anii urm\u0203tori . MT: The earthquake destroyed the pole , having to be reunified in the years to come . Gold Explanations Source: 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 Gold Explanations MT: 0 0 0 0 1 1 1 0 0 1 0 0 0 0 0 0 Gold Sentence Level Score: 49.667 Figure 1 : Example Romanian-English sentence data from the MLQE-PE training set (Fomicheva et al., 2020a) . Highlights show wrongly translated tokens.",
"cite_spans": [
{
"start": 401,
"end": 426,
"text": "(Fomicheva et al., 2020a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 321,
"end": 329,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Test data consists of further annotated Et-En and Ro-En sentences. Zero-shot test sets for German-Chinese (De-Zh) and Russian-German (Ru-De) language pairs are also provided in addition, which neither contain word-nor sentence-level annotations. We did not use the training set at all as the models were all pre-trained and evaluated the explanations on the development and test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Methods described in the following section follow a perturbation-based approach. As the name suggests, perturbation-based explainers perturb the inputs randomly, with the goal of observing a changing behavior on the output of the model to be explained, or even on its individual neurons (Shrikumar et al., 2016). Ribeiro et al. (2016) propose the usage of Local Interpretable Model-agnostic Explanations (LIME). As a model-agnostic explainability technique, LIME quickly gained popularity and acceptance not just in the NLP community. The main goal of LIME is to explain any complex model f : R d \u2192 R by creating a simple interpretable model g \u2208 G (e.g. a sparse linear model), that is locally trustworthy. This means that instead of trying to globally explain the predictions of the model, specific instances x \u2208 R d are selected and explained on a local level. While many modern NLP models such as Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa) or XLM-RoBERTa (XLM-R) (Devlin et al., 2019; Conneau et al., 2020 ) rely on word embeddings as their input representation, these representations contradict the expectations we have for interpretable explanations. As explanations should be easily understandable, Local Interpretable Model-agnostic Explanations (LIME) is using a word-level representation that enables humans to understand the influence of each word on the decision of the underlying classifier or regressor. For each instance to be explained, the original input is transformed into an interpretable representation x \u2032 \u2208 {0,",
"cite_spans": [
{
"start": 313,
"end": 334,
"text": "Ribeiro et al. (2016)",
"ref_id": "BIBREF22"
},
{
"start": 1042,
"end": 1063,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 1064,
"end": 1084,
"text": "Conneau et al., 2020",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbation-based methods",
"sec_num": "4.1"
},
{
"text": "1} d \u2032 , d \u2032 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIME",
"sec_num": "4.1.1"
},
{
"text": "number of words in a sentence, which holds a binary vector denoting whether a certain component is present or not. To explain a specific instance, the input x \u2032 is perturbed randomly (e.g. masking for text classification). The resulting perturbations are weighted by a proximity function \u03c0 x (e.g. cosine distance for text classification). The proximity function is a distance measure where an input that has been heavily modified should have a high distance to the original, which means that the explanation will probably also differ more strongly. The weighting process ensures that the resulting explanations are locally faithful, as more similar perturbations have a higher impact on the loss of the objective function and therefore on the overall explanation. Additionally, the resulting perturbations are used to minimize an error function \u03be(x), e.g. linear least squares in our case. Figure 2 shows the LIME explanations for the pre-trained MonoTransQuest (MTQ) estimator on an example translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 891,
"end": 899,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "LIME",
"sec_num": "4.1.1"
},
{
"text": "Lundberg and Lee (2017) propose SHapley Additive exPlanations (SHAP) which builds upon various existing explainability techniques unified in a class of additive feature attribution methods. To explain certain instances, SHAP values are utilized to measure the influence of features towards a certain prediction. As the source sentence is fixed, we are only interested in the explanation of the target sentence. Each word can be seen as an interpretable feature holding a score that denotes its influence on the decision of the estimator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SHAP",
"sec_num": "4.1.2"
},
{
"text": "Additive Feature Attribution Methods are a collection of methodologies that all utilize a linear function as their explanation model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SHAP",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(z \u2032 ) = \u03d5 + M i=1 \u03d5 i z \u2032 i ,",
"eq_num": "(1)"
}
],
"section": "SHAP",
"sec_num": "4.1.2"
},
{
"text": "The summation of the attributed effect \u03d5 i of each feature from z \u2032 approaches the original model f (x). Lundberg and Lee (2017) show that various current explainers like LIME or DeepLIFT (Shrikumar et al., 2017; Bach et al., 2015 ) match the definition in (1) and can therefore be transformed into an additive feature attribution method.",
"cite_spans": [
{
"start": 188,
"end": 212,
"text": "(Shrikumar et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 213,
"end": 230,
"text": "Bach et al., 2015",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SHAP",
"sec_num": "4.1.2"
},
{
"text": "Input Influence (Lipovetsky and Conklin, 2001; Bach et al., 2015; Strumbelj and Kononenko, 2014) . Shapley values provide a measure for the importance of individual features and are utilized in combination with the additive feature attribution methods to derive SHAP values. Based on these values, the SHAP explainer is able to attribute the impact of each feature to the overall prediction. SHAP conditions on one feature at a time and incrementally adds up the other features to determine \u03d5 i . As the calculated effect is often dependant on the order of the features presented to the equation, \u03d5 i is computed recursively for all possible orderings of features and then averaged (Shrikumar et al., 2016; Lundberg and Lee, 2017) .",
"cite_spans": [
{
"start": 16,
"end": 46,
"text": "(Lipovetsky and Conklin, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 47,
"end": 65,
"text": "Bach et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 66,
"end": 96,
"text": "Strumbelj and Kononenko, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 682,
"end": 706,
"text": "(Shrikumar et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 707,
"end": 730,
"text": "Lundberg and Lee, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SHAP Values are based on Shapley regression values, Shapley sampling values and Quantitative",
"sec_num": null
},
{
"text": "Occlusion is a generalization of the sliding window approach presented by Zeiler and Fergus (2014). The algorithm replaces (contiguous) patches of the input with a baseline (in our case: zero-scalar, denoting the presence or absence of a feature). Importance scores are calculated by measuring the effect of the perturbation in form of the difference in the predictions of the output layer. Therefore, we test each feature independently by comparing the output of the model if the feature is enabled (it takes its original value) versus disabling it (replacing the feature with the baseline). The resulting heatmap represents the attributions of each feature. Importance scores of the output are used to propagate their influence back to the inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Occlusion",
"sec_num": "4.1.3"
},
{
"text": "Gradient-based explainers (i.e. backpropagationbased explainers) are based on the idea of propagating an importance score measured at an individual output backwards to the network (Shrikumar et al., 2016) . We use these methods targeting the embedding layers of the QE model. The scores which we can propagate towards this layer serve as the explanations for individual embeddings and therefore for the overall model. Usually, these methods are more lightweight and thus require less computational overhead.",
"cite_spans": [
{
"start": 180,
"end": 204,
"text": "(Shrikumar et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based methods",
"sec_num": "4.2"
},
{
"text": "Layer Gradient X Activation is based on gradient*input which is a very simple explanation technique (Baehrens et al., 2010) . Since the gradients represent how, and how strongly the model will behave for each input dimension, they can be seen as an expression of importance. Unfortunately, the gradient is only an accurate representation of importance locally when considering small steps. Layer Gradient X Activation is based on Shrikumar et al. (2016) , which is the preceding work to DeepLIFT (Section 4.2.2). We use this approach to compute the element-wise product of gradients and activation, but contrary to regular gradient*input we only apply the method on the hidden embedding layer of the quality estimator, in order to retrieve explanations.",
"cite_spans": [
{
"start": 100,
"end": 123,
"text": "(Baehrens et al., 2010)",
"ref_id": "BIBREF2"
},
{
"start": 430,
"end": 453,
"text": "Shrikumar et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Layer Gradient X Activation",
"sec_num": "4.2.1"
},
{
"text": "DeepLIFT (Learning Important FeaTures) (Shrikumar et al., 2017) is used to explain instances by measuring the effect C \u2206x i \u2206t of a feature x i on the overall prediction if set to a predefined reference value compared to its true value. The summation of effects for each input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "4.2.2"
},
{
"text": "n i=1 C \u2206x i \u2206t = \u2206t (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "4.2.2"
},
{
"text": "is called the summation-to-delta property with t being the reference activation of the output and \u2206t representing the difference-from-reference. Equation (2) conducts that the difference-from-reference can be determined for each x i in the context of the reference value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "4.2.2"
},
{
"text": "As the authors propose, DeepLIFT uses a set of rules (i.e. Linear, Rescale and RevealCancel) for assigning importance scores which can be seen as approximations of Shapley values (Section 4.1.2). In fact, the linear and rescale rules were presented in an earlier version of their work whereas the revealcancel rule was published later as an improved version, avoiding specific pitfalls which they describe in their work in detail. These rules are used to map the contribution scores of neurons to their immediate inputs, and further, to any input for a given target output by utilizing backpropagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "4.2.2"
},
{
"text": "We used Integrated Gradients (IG) (Sundararajan et al., 2017) as an additional gradient-based method during our experiments (see Appendix A), but we will not go into this method further as it does not have any noteworthy performance compared to the other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "4.2.2"
},
{
"text": "Throughout our experiments, we often observed that wrong words got high positive gradient explanations for the MTQ model. To further investigate this, we propose a different way of using the output of the explainers by just taking the absolute values for each word score. Therefore, we assume that wrong words do not have large negative gradients but instead that their gradient magnitude is higher than those of correctly translated words, leading to the intuition that wrong words have higher deviations in negative, but also positive direction with respect to the gradient. We labeled absolute methods with the prefix Abs. in our experiments (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Absolute methods",
"sec_num": "4.3"
},
{
"text": "To improve our performance of the task dataset, we consider simple, yet effective ensembling techniques of the previously described approaches. In short, we use an unsupervised ensemble method to combine the different explainers. Therefore, we tested combining the individual explanations using the minimum, maximum or mean explanation values of all members in the ensemble. With these simple voting strategies, we were able to find ensembles that outperformed their individual members by a significant margin while maintaining reasonable computational extra costs. Our most successful ensembles are visualized in Figure 4 . We report on our results in Section 5.2.1 and 5.2.2 in more detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 614,
"end": 622,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Ensemble methods",
"sec_num": "4.4"
},
{
"text": "All of our gradient-based methods (Section 4.2) as well as the Occlusion approach (Section 4.1.3) were implemented using the open source Captum Model Interpretability Library (Kokhlikyan et al., 2019) .",
"cite_spans": [
{
"start": 175,
"end": 200,
"text": "(Kokhlikyan et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.1"
},
{
"text": "We base our results primarily on the outputs of the MonoTransQuest (MTQ) model but also tried experimenting with XMover-Score (XMS) whenever possible (mentioned in Section 2). Due to difficulties of making the XMover-Score (XMS) model work with Captum, we were only able to retrieve predictions of the quality estimator for LIME and SHAP, for which we used their original implementations published by the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.1"
},
{
"text": "We were presented with two strong baselines by the shared task, XMS+SHAP and MTQ+LIME, as well as a weak baseline of random explanations. We extended the baselines with an all-zero baseline for both language pairs, which serves as another weak baseline where no explanations are used at all. A detailed overview of our results can be found in Table 1 and Table 4 . We evaluated each explainer for both language pairs and also used simple unsupervised ensembles of specific explainers.",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 362,
"text": "Table 1 and Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.1"
},
{
"text": "We conducted our experiments with MTQ using those pre-trained models for the respective language pair. We performed additional tests for the zero-shot language pairs using the any-to-any model, as no specialized pre-trained models were available for these language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.1"
},
{
"text": "All methods were used with the initial parameters, except for LIME where we explored the difference in using 1000 samples instead of the default 5000 samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.1"
},
{
"text": "To receive the gradient-based explanation we use the embedding layer of the MTQ model which uses XLM-RoBERTa word embeddings. This embedding layer raises the problem of WordPiece embeddings (Wu et al., 2016) : specific words are not only represented by one word embedding but can consist of multiple WordPiece embeddings where each of them will result in an explanation. The number of explanations will therefore be higher than the actual word counts in most cases (see Figure 3) .",
"cite_spans": [
{
"start": 190,
"end": 207,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 470,
"end": 479,
"text": "Figure 3)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Word piece explanations",
"sec_num": "5.2"
},
{
"text": "Text: Turnul a fost distrus de cutremur , trebuind s\u0203 fie recunstruit \u00een anii urm\u0203tori . 15Embedded text: _Turn, ul, _a, _fost, _distr, us, _de, _cutremur, _, _trebui, nd, _s\u0203, _fie, _recu, n, stru, it, _\u00een, _ani, i, _urm\u0203tor, i, _. (23) In order to overcome this problem, we assume that if one WordPiece is translated wrong, the whole word translation is wrong. For the nonabsolute methods specifically (we do not use the absolute explanations here) we take the minimum of the different WordPiece explanations. The underlying assumption is that if one WordPiece has a low score it indicates that this WordPiece and therefore the whole word is translated wrong. For the absolute methods, which assume that wrong words have larger absolute values, we use the maximum of the absolute WordPiece explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word piece explanations",
"sec_num": "5.2"
},
{
"text": "We conducted additional experiments with different methods, e.g. using mean or absolute minimum, but the results were not on a par with the abovementioned methods for all explainers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word piece explanations",
"sec_num": "5.2"
},
{
"text": "Since LIME and SHAP are model-agnostic and do not use WordPieces for their explanations, this problem does not occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word piece explanations",
"sec_num": "5.2"
},
{
"text": "Our results show that most gradient-based methods are hardly able to compete with the baseline of MTQ+LIME. While gradient-based explainers remain clearly below the baseline's values on both datasets, we could achieve at least comparable results using DeepLIFT or Layer Gradient X Activation (LGXA) with absolute word scores. As it can be seen in Table 1 , we can achieve comparable, if not even slightly improved AUC scores for these methods while maintaining mostly equal precision and recall values. In general, our experiments show that the usage of absolute word scores fails across the board for perturbation-based approaches, leading to even worse results than both weak baselines. Considering our initial observation (Section 4.3), this is not a big surprise as it is only valid for gradient-based methods. For precisely those approaches, however, we can see an improvement in every case if we consider just taking the absolute word scores of the respective explainer instead of the signed values. Our best performing approach, Abs. DeepLIFT outperforms MTQ+LIME with a difference of 0.056 in AUC on the Ro-En dataset and 0.029 for Et-En.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Individual explainers",
"sec_num": "5.2.1"
},
{
"text": "The baseline of XMS+SHAP has improved AUC values, but lower precision and recall scores than MTQ+LIME. Comparing our methods to XMS+SHAP reflects these differences, ultimately leading to Abs. DeepLIFT and Abs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual explainers",
"sec_num": "5.2.1"
},
{
"text": "LGXA now consequently outperforming the baseline on all measured scores. The AUC values for Abs. DeepLIFT are better than XMS+SHAP by 0.037 for Ro-En and by 0.029 for Et-En.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual explainers",
"sec_num": "5.2.1"
},
{
"text": "Across just the perturbation-based methods we can see mostly similar performances close to the XMS+SHAP baseline. Because the runtime of LIME with the default 5000 samples was too high, we only used 1000 samples. This decrease in runtime only reduced the performance marginally. For the language pair Ro-En there was only a decrease in AUC of 0.001 and 0.005 for Et-En. As the word scores for LIME were not available, we needed to calculate the word scores of LIME for our ensembles. Experiments with 1000 samples were sufficient. Occlusion, our additional perturbationbased explainer, performed worse than LIME and SHAP with AUCs of 0.59 for Ro-En and 0.533 for Et-En.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual explainers",
"sec_num": "5.2.1"
},
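The Occlusion explainer mentioned above can be illustrated with a toy sketch: each word is masked in turn, and the magnitude of the change in the sentence-level quality score serves as that word's relevance. The `dummy_qe` model and the mask token below are stand-ins for illustration only, not the paper's QE system:

```python
# Toy Occlusion sketch. The dummy QE model is a hypothetical stand-in
# that simply penalises the misspelled token "catt".

def dummy_qe(words):
    return 1.0 - 0.5 * words.count("catt") / max(len(words), 1)

def occlusion_scores(words, model, mask="<unk>"):
    base = model(words)
    scores = []
    for i in range(len(words)):
        masked = words[:i] + [mask] + words[i + 1:]
        # magnitude of the change in predicted quality when word i is hidden
        scores.append(abs(base - model(masked)))
    return scores

sentence = ["the", "catt", "sat"]
print(occlusion_scores(sentence, dummy_qe))
```

Each masked variant requires one extra forward pass, which is why Occlusion sits between gradient-based methods and LIME/SHAP in runtime.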
{
"text": "We observe that the results of LIME and SHAP with XMS are better than the MTQ results. Experiments with XMS could thus further improve the explainability performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual explainers",
"sec_num": "5.2.1"
},
{
"text": "We can see that the simple ensemble methods, described in Section 4.4, are able to improve the explanation score for absolute and non-absolute gradient methods. For the non-absolute method, a maximum ensemble of IG, DeepLift and LGXA results in an AUC score of 0.599 which is an improvement of 0.032 compared to the best single explainer LGXA. We see an improvement of 0.007 in the group of absolute gradient methods with the LGXA, reaching 0.682. There is also an improvement in the values if we use an ensemble of all perturbation-based methods. The maximum ensemble of them was better than the best performing perturbation-based method LIME+XMS by 0.035. Using all the methods and combining them into an ensemble did not increase the performance of the explainers. The all-method ensemble achieves the same AUC score as the ensemble with Abs. DeepLift and Abs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembles",
"sec_num": "5.2.2"
},
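The maximum ensemble described above can be sketched as an element-wise maximum over per-word scores. The min-max normalisation step and the example scores are illustrative assumptions, not the paper's exact procedure:

```python
# Hypothetical sketch of a maximum ensemble over two explainers.
# Scores are min-max normalised per explainer so they are comparable,
# then combined word-wise with max().

def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def max_ensemble(*score_lists):
    normalised = [minmax(s) for s in score_lists]
    return [max(col) for col in zip(*normalised)]

# Illustrative per-word scores (not taken from the paper's runs):
abs_deeplift = [0.1, 0.9, 0.2, 0.4]
abs_lgxa     = [0.3, 0.5, 0.8, 0.1]
print(max_ensemble(abs_deeplift, abs_lgxa))
```

Because the combination is a simple element-wise operation, the ensemble's runtime is just the sum of its members' runtimes.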
{
"text": "LGXA for the language pair Ro-En, but it was worse in all other metrics. Our best performing explainer is the ensemble consisting of Abs. DeepLift and Abs. LGXA. Figure 4 shows the AUC scores of the models compared to the duration in seconds for generating a single explanation. It can be seen that gradientbased methods have a massive advantage compared to perturbation-based methods in terms of the required computation time. Although we only measured execution times naively, the collected values should be sufficient to obtain a rough picture of how the individual methods compare to each other. Since perturbation-based methods do multiple forward passes while gradient-based methods use only one, we can see that gradient-based methods are much faster than perturbation-based ones overall. While LGXA only takes about 0.1 seconds, LIME with 5000 samples can take more than 30 seconds for a single sentence explanation.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Ensembles",
"sec_num": "5.2.2"
},
{
"text": "Occlusion proves to achieve worse results than LIME and SHAP but with a faster execution time. However, Occlusion still requires significantly longer execution times than most gradient-based approaches, making it not competitive with the other methods. Since our ensembles are simple operations, their duration is only the sum of the duration times of each ensemble member. As shown, our ensemble with gradient-based methods is not only significantly faster but also performs better than the baselines of LIME and SHAP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation duration",
"sec_num": "5.3"
},
{
"text": "We did further experiments with the provided language pairs De-Zh and Ru-De. The results can be seen in Table 2 . We tried using our best-performing method, the maximum ensemble of Abs. DeepLift and Abs.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Zero-shot explainers",
"sec_num": "5.4"
},
{
"text": "LGXA. By using this setup we could easily include IG into the list of experiments. Surprisingly, absolute methods, except Abs. IG, perform worse than their non-absolute counterparts for De-Zh. The best performing method for De-Zh was the traditional ensemble consisting of all gradient-based methods. With an AUC of only 0.569 the explainer performance is insufficient. We can observe similar performances for the language pair Ru-De where the absolute methods are worse. The exception is Abs. DeepLift with the best performing AUC value of 0.621 and an AP value of 0.511 which is even better than the best score for Et-En. These experiments illustrate that the performances of the explainers vary significantly for each language pair. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot explainers",
"sec_num": "5.4"
},
{
"text": "For each language pair we chose methods that performed best on the development sets as our submission methods. Our results are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Shared task performance",
"sec_num": "5.5"
},
{
"text": "The any-to-any tokenizer had problems with tokenizing Russian and Chinese sentences, which led to the problem of getting less word level explanations than necessary. We solved this by padding the explanations with the default value of 1 as a simple fix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared task performance",
"sec_num": "5.5"
},
{
"text": "For the Et-En and Ro-En language pairs, our ensembling approach outperformed the random and XMS+SHAP baselines, while performing on par with MTQ+LIME. On the Ru-De and De-Zh data, our approach hardly shows improvements over the given baselines which probably results due to the weak performance of the any-to-any model on these language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared task performance",
"sec_num": "5.5"
},
{
"text": "The dependency on the performance of QE models is the main limitation of our approaches. Explanations are generated in a pipeline fashion where potential errors will be propagated into the explanation models. MTQ and XMS work quite well already, but there is still room for improvement, especially for language pairs that are not that similar. It is likely that we could achieve better explanations with better QEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Another limitation of absolute gradient-based methods is their model-awareness where explainers need access to a QE model's training procedure in order to calculate the gradients. In cases where explanations should be generated in a black-box manner, perturbation-based methods like LIME or SHAP are better suited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We showed that perturbation-based methods work generally well for predicting errors in MT based on QE and that existing gradient-based methods perform quite poorly in comparison. Our proposed absolute gradient method is a simple extension of those existing methods, but with large performance improvements. However, absolute perturbations seem to worsen the performance of existing perturbation-based methods. Explainer ensembles outperformed single explainers in all cases, where maximum ensembles generally worked best. Absolute explanations also improved gradient-based ensembles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our experiments justify the popularity of perturbation-based explainers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Nonetheless, gradient-based methods should not be overlooked. They are not only faster in comparison, but with the extension of absolute explanation ensembles can also perform better for the given task and are hence worth to consider.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We showed that absolute gradient-based methods are worthy contenders to perturbation-based methods when it comes to generating plausible word-level explanations for MT. Explainer ensembles also exploit the strengths of their individual members and yield better explanations, be they perturbation-based or gradient-based ones. Gradient-based methods have the potential to be used in online applications given that they are more time-efficient than popular perturbation-based approaches, even as ensembles. Black-box models however are better explained with regular perturbation-based methods. Future work might explore training QE and explanation methods end-to-end, find better performing (multilingual) QE models, or train models on word-level information. One could also try to solve the given problem with recently proposed explanation methods that try to tackle problems of existing explainers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Another way of improving the explanation scores might be using supervised ensemble methods on different explainers by using the training dataset to train e.g. a simple decision tree. The training dataset could be also used to finetune the QE models. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work was created in the context of the Meta-Science Seminar at the Technical University of Darmstadt. We thank Steffen Eger for his supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Klauschen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2015,
"venue": "PLOS ONE",
"volume": "10",
"issue": "7",
"pages": "1--46",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0130140"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Montavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wo- jciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise rele- vance propagation. PLOS ONE, 10(7):1-46.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How to explain individual classification decisions",
"authors": [
{
"first": "David",
"middle": [],
"last": "Baehrens",
"suffix": ""
},
{
"first": "Timon",
"middle": [],
"last": "Schroeter",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Harmeling",
"suffix": ""
},
{
"first": "Motoaki",
"middle": [],
"last": "Kawanabe",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2010,
"venue": "J. Mach. Learn. Res",
"volume": "11",
"issue": "",
"pages": "1803--1831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert M\u00fcller. 2010. How to explain individual classifica- tion decisions. J. Mach. Learn. Res., 11:1803-1831.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using error annotation to evaluate machine translation and human post-editing in a business environment",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Comparin",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Mendes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EAMT 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Comparin and Sara Mendes. 2017. Using error annotation to evaluate machine translation and human post-editing in a business environment. Proceedings of EAMT 2017, Prague, May 29, 31.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2020, On- line, July 5-10, 2020, pages 8440-8451. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The eval4nlp shared task on explainable quality estimation: Overview and results",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The eval4nlp shared task on explainable quality estima- tion: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "MLQE-PE: A multilingual quality estimation and post-editing dataset",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Lopatina",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.04480"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Erick Fonseca, Fr\u00e9d\u00e9ric Blain, Vishrav Chaudhary, Francisco Guzm\u00e1n, Nina Lopatina, Lucia Specia, and Andr\u00e9 F. T. Martins. 2020a. MLQE-PE: A multilingual quality esti- mation and post-editing dataset. arXiv preprint arXiv:2010.04480.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": null,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "8",
"issue": "",
"pages": "539--555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Spe- cia. 2020b. Unsupervised quality estimation for neu- ral machine translation. Trans. Assoc. Comput. Lin- guistics, 8:539-555.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Explainable automated fact-checking for public health claims",
"authors": [
{
"first": "Neema",
"middle": [],
"last": "Kotonya",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "7740--7754",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.623"
]
},
"num": null,
"urls": [],
"raw_text": "Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking for public health claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7740- 7754. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Analysis of regression in game theory approach",
"authors": [
{
"first": "Stan",
"middle": [],
"last": "Lipovetsky",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Conklin",
"suffix": ""
}
],
"year": 2001,
"venue": "Applied Stochastic Models in Business and Industry",
"volume": "17",
"issue": "",
"pages": "319--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stan Lipovetsky and Michael Conklin. 2001. Anal- ysis of regression in game theory approach. Ap- plied Stochastic Models in Business and Industry, 17(4):319-330.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multilingual denoising pretraining for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre- training for neural machine translation. Trans. Assoc. Comput. Linguistics, 8:726-742.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in Neural Information Processing Systems 30: Annual Conference on Neural Information Process- ing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learned in translation: Contextualized word vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6294--6305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural Information Processing Systems 30: Annual Confer- ence on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6294-6305.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Interpretml: A unified framework for machine learning interpretability",
"authors": [
{
"first": "Harsha",
"middle": [],
"last": "Nori",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Jenkins",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. 2019. Interpretml: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning interpretable negation rules via weak supervision at document level: A reinforcement learning approach",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Pr\u00f6llochs",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Feuerriegel",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis",
"volume": "1",
"issue": "",
"pages": "407--413",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1038"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Pr\u00f6llochs, Stefan Feuerriegel, and Dirk Neu- mann. 2019. Learning interpretable negation rules via weak supervision at document level: A reinforce- ment learning approach. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Min- neapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 407-413. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "4932--4942",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1487"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain your- self! leveraging language models for commonsense reasoning. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 4932-4942. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Transquest: Translation quality estimation with cross-lingual transformers",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "5070--5081",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.445"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. Transquest: Translation quality esti- mation with cross-lingual transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 5070- 5081. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "why should i trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {
"DOI": [
"10.1145/2939672.2939778"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should i trust you?\": Explain- ing the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 1135-1144, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3145--3153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3145-3153. PMLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Not just a black box: Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Shcherbina",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating ac- tivation differences. CoRR, abs/1605.01713.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Explaining prediction models and individual predictions with feature contributions",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Strumbelj",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Kononenko",
"suffix": ""
}
],
"year": 2014,
"venue": "Knowl. Inf. Syst",
"volume": "41",
"issue": "3",
"pages": "647--665",
"other_ids": {
"DOI": [
"10.1007/s10115-013-0679-x"
]
},
"num": null,
"urls": [],
"raw_text": "Erik Strumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst., 41(3):647- 665.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceed- ings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Allennlp interpret: A framework for explaining predictions of NLP models",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3002"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Sub- ramanian, Matt Gardner, and Sameer Singh. 2019. Allennlp interpret: A framework for explaining pre- dictions of NLP models. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, Novem- ber 3-7, 2019 -System Demonstrations, pages 7-12. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -Demos",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing: System Demon- strations, EMNLP 2020 -Demos, Online, November 16-20, 2020, pages 38-45. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine trans- lation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Visualizing and understanding convolutional networks",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision -ECCV 2014 -13th European Conference",
"volume": "8689",
"issue": "",
"pages": "818--833",
"other_ids": {
"DOI": [
"10.1007/978-3-319-10590-1_53"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler and Rob Fergus. 2014. Visualiz- ing and understanding convolutional networks. In Computer Vision -ECCV 2014 -13th European Con- ference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818-833. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glavas",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "1656--1671",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.151"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Goran Glavas, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the lim- itations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1656-1671. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019",
"volume": "",
"issue": "",
"pages": "563--578",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1053"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 563-578. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Explanations provided by the LIME Text Explainer (Ribeiro et al., 2016) on our example sentence.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "WordPiece example. Pieces without a _-prefix indicate that the corresponding word is split into multiple word pieces. Numbers in brackets show the number of words and the number of WordPieces in the text.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "AUC scores on the development set for the MonoTransQuest explanations, combined with explanation time, for the Ro-En language pair. Note that we use a log scale, which shows that the perturbation-based methods LIME and SHAP in particular take much longer than the gradient-based methods.",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"text": "Evaluation results for different explanation methods, models and both language pairs on the development set -MTQ: MonoTransQuest, XMS: XMover-Score, AP: average precision, RC: recall on top 5. This table only shows some of the methods that we implemented. The complete list can be found in appendix A. The underlined scores show the best scores for a method type, the bold values show the global maximum and the italic values show the baselines.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "",
"content": "<table><tr><td>: Results for the zero-shot language pairs De-Zh and Ru-De on the development set (provided gold standard</td></tr><tr><td>of 20 annotated sentence pairs). No baselines were provided. -MTQ: MonoTransQuest, [E]: Ensemble of all</td></tr><tr><td>methods, [E 2,3] ensembles of methods 2 and 3</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "Results on the test data of our models that performed best on the development set. All methods use MTQ as their QE model. We include the results of the best performing baseline in terms of AUC score for each language pair for comparison in italic.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"text": "Results of various explanation methods for the Ro-En and Et-En language pairs on the development set -",
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}