{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:41:35.680718Z"
},
"title": "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Hase",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina at Chapel Hill",
"location": {}
},
"email": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina at Chapel Hill",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many methods now exist for conditioning models on task instructions and user-provided explanations for individual data points. These methods show great promise for improving task performance of language models beyond what can be achieved by learning from individual (x, y) pairs. In this paper, we (1) provide a formal framework for characterizing approaches to learning from explanation data, and (2) we propose a synthetic task for studying how models learn from explanation data. In the first direction, we give graphical models for the available modeling approaches, in which explanation data can be used as model inputs, as targets, or as a prior. In the second direction, we introduce a carefully designed synthetic task with several properties making it useful for studying a model's ability to learn from explanation data. Each data point in this binary classification task is accompanied by a string that is essentially an answer to the why question: \"why does data point x have label y?\" (Miller, 2019). We aim to encourage research into this area by identifying key considerations for the modeling problem and providing an empirical test bed for theories of how models can best learn from explanation data. 1",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Many methods now exist for conditioning models on task instructions and user-provided explanations for individual data points. These methods show great promise for improving task performance of language models beyond what can be achieved by learning from individual (x, y) pairs. In this paper, we (1) provide a formal framework for characterizing approaches to learning from explanation data, and (2) we propose a synthetic task for studying how models learn from explanation data. In the first direction, we give graphical models for the available modeling approaches, in which explanation data can be used as model inputs, as targets, or as a prior. In the second direction, we introduce a carefully designed synthetic task with several properties making it useful for studying a model's ability to learn from explanation data. Each data point in this binary classification task is accompanied by a string that is essentially an answer to the why question: \"why does data point x have label y?\" (Miller, 2019). We aim to encourage research into this area by identifying key considerations for the modeling problem and providing an empirical test bed for theories of how models can best learn from explanation data. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A long line of past work has sought to use freetext explanations, rationales, and other similar data to improve machine learning models. Proposed methods use explanations to constrain or regularize the learned model (Zaidan et al., 2007; Small et al., 2011; Ba et al., 2015; Zhang et al., 2016; Srivastava et al., 2017; , to automatically label data for data augmentation (Hancock et al., 2018; Wang et al., 2019a; Awasthi et al., 2020) , as additional supervision (Narang et al., 2020; Hase Figure 1: Hypothetical data and explanations. Here, x is an input that one might expect a model to produce the correct output for after fitting to (x, y) pairs. For some models, x may be sufficient, while others may benefit from additional information provided by e. Pruthi et al., 2021) or intermediate structured variables (Camburu et al., 2018; Rajani et al., 2019; Wiegreffe et al., 2020) , and simply as model inputs (Rupprecht et al., 2018; Co-Reyes et al., 2019; Zhou et al., 2020) .",
"cite_spans": [
{
"start": 216,
"end": 237,
"text": "(Zaidan et al., 2007;",
"ref_id": null
},
{
"start": 238,
"end": 257,
"text": "Small et al., 2011;",
"ref_id": "BIBREF31"
},
{
"start": 258,
"end": 274,
"text": "Ba et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 275,
"end": 294,
"text": "Zhang et al., 2016;",
"ref_id": null
},
{
"start": 295,
"end": 319,
"text": "Srivastava et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 372,
"end": 394,
"text": "(Hancock et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 395,
"end": 414,
"text": "Wang et al., 2019a;",
"ref_id": "BIBREF37"
},
{
"start": 415,
"end": 436,
"text": "Awasthi et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 465,
"end": 486,
"text": "(Narang et al., 2020;",
"ref_id": null
},
{
"start": 487,
"end": 491,
"text": "Hase",
"ref_id": "BIBREF10"
},
{
"start": 759,
"end": 779,
"text": "Pruthi et al., 2021)",
"ref_id": "BIBREF25"
},
{
"start": 817,
"end": 839,
"text": "(Camburu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 840,
"end": 860,
"text": "Rajani et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 861,
"end": 884,
"text": "Wiegreffe et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 914,
"end": 938,
"text": "(Rupprecht et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 939,
"end": 961,
"text": "Co-Reyes et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 962,
"end": 980,
"text": "Zhou et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, there are many tasks in NLP where improvements in performance prove elusive even when using thousands of explanations as additional data (Narang et al., 2020; Hase et al., 2020) . A few observations could explain this situation: (1) the modeling space has not been fully explored for these tasks, but improvements are possible; (2) pretrained language models already store the knowledge that the explanations would have provided, so they do not need them; (3) the language models do not need any information that is not already learnable from the task's input-output pairs. We do not yet know which explanation is best, and therefore it would be helpful to more deeply understand the motivations behind existing modeling approaches.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Narang et al., 2020;",
"ref_id": null
},
{
"start": 168,
"end": 186,
"text": "Hase et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we (1) present a formal framework for characterizing approaches to learning from explanation data, and (2) we propose a synthetic task for studying how models learn from natural language data. Specifically, we first present graphical models for various approaches where explanation data is used either as model inputs, targets, or priors, and we characterize existing methods according to these graphical models. Then, based on past results, we suggest which models might be most appropriate for explanation data. Next, we present a synthetic task which shares important properties with NLP tasks involving explanation data. Constructing this task helps us carefully specify the manner in which we expect explanations to be useful to models. We provide simple experimental verification that the task is solvable by existing Transformer models when using explanations as additional data but very difficult to solve without them. Our aim is to outline promising approaches in the area and contribute a concrete test bed to assist others in developing new models for learning from natural language explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, we discuss our framework for modeling with explanations and relevant work (Sec. 2.1), as well as promising approaches for learning from explanations (Sec. 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalizing the Roles of Explanations",
"sec_num": "2"
},
{
"text": "What Is an Explanation? We use the term \"explanation\" to refer to the data one might collect if asking a person to answer the question, \"Why does data point x have label y?\" This is a formulation of the explanation as an answer to a why-question of the kind discussed in Miller (2019) . Rather than try to give a formal definition of the kind of data generated from this question, we proceed with some illustrative examples, shown in Fig. 1 .",
"cite_spans": [
{
"start": 271,
"end": 284,
"text": "Miller (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 434,
"end": 440,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formalizing the Roles of Explanations",
"sec_num": "2"
},
{
"text": "In this section, we lay out our theory of how explanations may be used in modeling a task, in a standard supervised learning setup for obtaining a MAP estimate of model parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "\u03b8 = arg max \u03b8 p(\u03b8|X, Y ) p(\u03b8|X, Y ) \u221d p(Y |X, \u03b8)p(\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "where Y is a set of labels for inputs X. We refer to the role of Y in this probabilistic model as the target, X as an input, and p(\u03b8) as a prior. Below we describe existing approaches to adding explanations into this framework. An overview of the corresponding graphical models is shown in Fig. 2 Figure 2 : Graphical models for several approaches to using explanations as targets, as inputs, and as priors. Typically past works do not condition on human-given explanations at test time, unless they are designed to not leak the data point label.",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 296,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 297,
"end": 305,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "as Multi-Task in Fig. 2 ). For instance, Pruthi et al. (2021) consider using attention weight explanations (from a model) as targets in a multitask framework, and they observe accuracy improvements in what is essentially model distillation. Meanwhile, natural language explanations appear as targets in a multi-task framework, using datasets with explanations for each data point (Camburu et al., 2018; Narang et al., 2020; Hase et al., 2020; Wiegreffe et al., 2020) . None of these works find improvements in task performance from incorporating explanations. It is perhaps even concerning that a model could learn to generate coherent \"explanations\" without the learning of this ability influencing the models that are found for the task.",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "Pruthi et al. (2021)",
"ref_id": "BIBREF25"
},
{
"start": 380,
"end": 402,
"text": "(Camburu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 403,
"end": 423,
"text": "Narang et al., 2020;",
"ref_id": null
},
{
"start": 424,
"end": 442,
"text": "Hase et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 443,
"end": 466,
"text": "Wiegreffe et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 17,
"end": 23,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "Using Explanations as Inputs. Additional inputs may be valuable for solving some tasks. One family of approaches uses explanations as model inputs for each data point (Per Data Point Input in Fig. 2 ). Talmor et al. (2020) systematically study RoBERTa's ability to combine pieces of knowledge for a task by including relevant factoids in the text input. Co-Reyes et al. (2019) provide online natural language feedback to RL agents, and Rupprecht et al. (2018) take a similar approach to interactive image segmentation with language feedback.",
"cite_spans": [
{
"start": 202,
"end": 222,
"text": "Talmor et al. (2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 192,
"end": 198,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "More commonly, approaches do not use human explanations at test time. In ExpBERT (Murty et al., 2020) , a model conditions on vector representations of an input x and a single \"global\" set of explanations in order to make each prediction (Global Set in Fig. 2 ). This approach may not scale well to large numbers of explanations, however. Zhou et al. (2020) treat explanations as latent variables, and at inference time they retrieve explanations from the training data (Retrieval in Fig. 2) . A number of works condition on explanations generated at test time using generative models learned with human explanations as supervision, which are represented as Structured Variable and Per-Label Structured Variable in Fig. 2 (Camburu et al., 2018; Rajani et al., 2019; Kumar and Talukdar, 2020; Hase et al., 2020; Wiegreffe et al., 2020; Zhao and Vydiswaran, 2021) . While such structured variables could be useful in principle, these methods have not produced sustained improvements in model accuracy.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Murty et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 722,
"end": 744,
"text": "(Camburu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 745,
"end": 765,
"text": "Rajani et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 766,
"end": 791,
"text": "Kumar and Talukdar, 2020;",
"ref_id": "BIBREF13"
},
{
"start": 792,
"end": 810,
"text": "Hase et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 811,
"end": 834,
"text": "Wiegreffe et al., 2020;",
"ref_id": "BIBREF39"
},
{
"start": 835,
"end": 861,
"text": "Zhao and Vydiswaran, 2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 253,
"end": 259,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 484,
"end": 491,
"text": "Fig. 2)",
"ref_id": null
},
{
"start": 715,
"end": 721,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "Lastly, large language models have recently opened the door for using explanations in few-shot in-context learning (Brown et al., 2020) . We represent this approach as Few-shot In-context Learning in Fig. 2 . We do not draw the dependencies between distinct data points in the context that would be implied by the attention graph of Transformers, but instead represent the dependence of each data point on the unknown task \u03c4 , which models evidently do inference over at test time. Initial work in this direction suggests that models of a sufficiently large size (280B parameters) can learn from explanations provided in a few-shot in-context learning setting (Lampinen et al., 2022) .",
"cite_spans": [
{
"start": 115,
"end": 135,
"text": "(Brown et al., 2020)",
"ref_id": null
},
{
"start": 660,
"end": 683,
"text": "(Lampinen et al., 2022)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 200,
"end": 206,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
{
"text": "Using Explanations as Priors. We group together approaches to defining a distribution over model parameters, including those conditioning on data, p(\u03b8|data). This is a prior over model weights not in the sense that the distribution is independent of data (which it is not), but rather that the posterior parameters are conditioned on the prior. Explanations have been used to constrain the learned model (Srivastava et al., 2017 (Srivastava et al., , 2018 or to place priors over how features are weighted or extracted (Zaidan et al., 2007; Small et al., 2011; Zhang et al., 2016; Ross et al., 2017; Bao et al., 2018; Selvaraju et al., 2019; Stammer et al., 2020; Pruthi et al., 2021; Stacey et al., 2022) . Other works map directly from text to model parameters (Ba et al., 2015; Andreas et al., 2018) . These meth-ods are all effectively described by Regularizer or Hypernetwork in Fig. 2 . Lastly, a few approaches learn to use explanations for automatically labeling data for data augmentation purposes (Hancock et al., 2018; Wang et al., 2019b; Awasthi et al., 2020) , which is effectively fitting to data from a prior distribution given by the labeling mechanism (Data Augmentation in Fig. 2 ).",
"cite_spans": [
{
"start": 404,
"end": 428,
"text": "(Srivastava et al., 2017",
"ref_id": "BIBREF32"
},
{
"start": 429,
"end": 455,
"text": "(Srivastava et al., , 2018",
"ref_id": "BIBREF33"
},
{
"start": 519,
"end": 540,
"text": "(Zaidan et al., 2007;",
"ref_id": null
},
{
"start": 541,
"end": 560,
"text": "Small et al., 2011;",
"ref_id": "BIBREF31"
},
{
"start": 561,
"end": 580,
"text": "Zhang et al., 2016;",
"ref_id": null
},
{
"start": 581,
"end": 599,
"text": "Ross et al., 2017;",
"ref_id": "BIBREF28"
},
{
"start": 600,
"end": 617,
"text": "Bao et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 618,
"end": 641,
"text": "Selvaraju et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 642,
"end": 663,
"text": "Stammer et al., 2020;",
"ref_id": "BIBREF35"
},
{
"start": 664,
"end": 684,
"text": "Pruthi et al., 2021;",
"ref_id": "BIBREF25"
},
{
"start": 685,
"end": 705,
"text": "Stacey et al., 2022)",
"ref_id": "BIBREF34"
},
{
"start": 763,
"end": 780,
"text": "(Ba et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 781,
"end": 802,
"text": "Andreas et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1007,
"end": 1029,
"text": "(Hancock et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 1030,
"end": 1049,
"text": "Wang et al., 2019b;",
"ref_id": "BIBREF38"
},
{
"start": 1050,
"end": 1071,
"text": "Awasthi et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 884,
"end": 890,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 1191,
"end": 1197,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},
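{
"text": "To summarize the three roles schematically, in the notation above (a simplified rendering; individual methods differ in their exact factorizations): as targets (Multi-Task), explanations enter the likelihood as p(y, e | x, \u03b8); as inputs, the model computes p(y | x, e, \u03b8), or marginalizes over retrieved or generated explanations via \u2211_e p(y | x, e, \u03b8) p(e | x); as priors, explanations condition the parameter distribution, so that p(\u03b8 | X, Y, e) \u221d p(Y | X, \u03b8) p(\u03b8 | e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Framework and Relevant Work",
"sec_num": "2.1"
},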
{
"text": "Based on our review of existing approaches, we make a few key observations that we believe will assist in the design of future techniques:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promising Models",
"sec_num": "2.2"
},
{
"text": "1. Using free-text explanations as structured variables and as targets do not appear to be promising approaches at the moment (Hase et al., 2020; Narang et al., 2020) .",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Hase et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 146,
"end": 166,
"text": "Narang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Promising Models",
"sec_num": "2.2"
},
{
"text": "2. Free-text explanations may be useful as priors in computer vision ), but we know of no successful use case for tasks besides Stacey et al. (2022) , which effectively reduces free-text explanations to a bag of words.",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "Stacey et al. (2022)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Promising Models",
"sec_num": "2.2"
},
{
"text": "3. The only cases we know of where free-text explanations improve model performance on NLP tasks is when they are used as model inputs via the Global Set model, (Murty et al., 2020) a Retrieval model (Zhou et al., 2020) , and an In-Context Learning model using 280B parameters (Lampinen et al., 2022).",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Murty et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 200,
"end": 219,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Promising Models",
"sec_num": "2.2"
},
{
"text": "The upshot of these results is that the most promising approaches for learning from explanation data are likely those treating explanations as inputs (in a manner that does not require new explanations at test time). However, we recommend that other graphical models not be ruled out completely, in case there are promising methods in those families that have yet to be explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promising Models",
"sec_num": "2.2"
},
{
"text": "Following recent work using synthetic data to investigate sequence modeling questions (Liu et al., 2021; Lovering et al., 2021) , we design a synthetic dataset so that we can carefully control several important data properties. In Fig. 3 , we show an example data point and description of how it gets its label. The premise of our task is to classify sequences by counting different integers in them. Core Idea Behind Data. We wish to design a task where, for a data point (x, y), an explanation An explanation that says why the input received its label, when understood properly",
"cite_spans": [
{
"start": 86,
"end": 104,
"text": "(Liu et al., 2021;",
"ref_id": "BIBREF18"
},
{
"start": 105,
"end": 127,
"text": "Lovering et al., 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 231,
"end": 237,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Synthetic Task",
"sec_num": "3"
},
{
"text": "Count whether there are more of integer a than integer b e communicates information about why input x receives label y. The premise of the task is that a binary label for a sequence of integers x is determined by whether there are more of an integer a in the sequence than there are of an integer b. We refer to integers (a, b) that need to be counted as the label reason. This label reason forms the basis of the explanation for each data point, and it is always exactly specified by the first two integers in x, which we term the index and indicator. We call the data e an explanation because it is a direct encoding of a natural language explanation for the data (x, y). For the data point in Fig. 3 , this natural language explanation is \"input x receives label 1 because it contains more 80's than 40's, and we do not need to count 17's or 27's for this sequence.\" Proposed Dataset. We describe the proposed dataset using some default data parameters for preliminary experiments, but any specific numbers appearing below are easily adjusted. See Supplement D for the full generative process. 1. Train set: 5000 sequences of 20 integers (including index and indicator), each accompanied by an explanation. There are 500 unique values of index in the dataset drawn from unif (1, 10000), so there are 10 points for each index, whose values of m, n, r, and d are drawn from unif (1, 100) while requiring that m =n =r =d. The corresponding 10 values of indicator are split between 1 and 2. Half of the points have label y=1, i.e. either #m>#n or #r>#d, depending on which feature is causal.",
"cite_spans": [],
"ref_spans": [
{
"start": 696,
"end": 702,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Synthetic Task",
"sec_num": null
},
{
"text": "In each x i , after m, n, r, and d have been randomly placed into the sequence, unfilled slots are filled with samples from unif (1, 100). 2. Dev set: 10,000 points, none appearing in Train, with the same 500 index values, and twice the number of points per index as Train. 3. Test set: 50,000 points of similar construction to the Dev set, but with five times the points per index as Train. Analogous Properties to Human-Curated Data. We claim that aspects of our synthetic task are analogous to properties that natural language data might take on, which we represent in Fig. 3 . First, e is an explanation in the sense that, when understood properly, it is a plausible answer to the question: \"why does point x have label y?\" The explanation describes the feature that causes the label, i.e. the integers that should be counted. We suggest that the index in a sequence is analogous to the topic of some text or the things it refers to: it is an easily computable feature that connects the input to the appropriate explanation. Meanwhile, the indicator is a feature that tells how information from an explanation is relevant to deciding the label. Similarly, an explanation might only be understood in the context of the input it explains.",
"cite_spans": [],
"ref_spans": [
{
"start": 572,
"end": 578,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Synthetic Task",
"sec_num": null
},
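{
"text": "To make the data format concrete, the following minimal Python sketch constructs a single (x, e, y) triple with the default parameters. This is not the authors' generation code: the function and variable names and the integer encoding of e are illustrative assumptions, and the label and placement balancing described in Supplement D is omitted.\n\nimport random\n\nMAX_INT, SEQ_LEN = 100, 20\n\ndef make_point(index, indicator, m, n, r, d):\n    # label reason: count (m, n) if indicator == 1, else (r, d)\n    a, b = (m, n) if indicator == 1 else (r, d)\n    # first two slots hold index and indicator; the rest are uniform filler\n    body = [random.randint(1, MAX_INT) for _ in range(SEQ_LEN - 2)]\n    x = [index, indicator] + body\n    y = int(body.count(a) > body.count(b))\n    # the explanation e directly encodes the latent task information\n    e = [index, indicator, m, n, r, d]\n    return x, e, y\n\nindex = random.randint(1, 10000)                      # ~500 unique values in the train set\nm, n, r, d = random.sample(range(1, MAX_INT + 1), 4)  # distinct integers\nx, e, y = make_point(index, indicator=1, m=m, n=n, r=r, d=d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Task",
"sec_num": null
},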
{
"text": "We include experiments below that (1) show explanation data is helpful for solving our task and (2) demonstrate why the task is hard without explanation data. We make use of a retrieval-based model similar to Zhou et al. (2020), which learns to retrieve explanations from the training dataset to help with prediction at test time (details in Appendix B and C). This model is composed of a RoBERTabase classifier (Liu et al., 2019 ) and a SentenceR-oBERTa model used for retrieval (Reimers and Gurevych, 2019) . The baseline in our experiments is the RoBERTa classifer on its own.",
"cite_spans": [
{
"start": 412,
"end": 429,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF19"
},
{
"start": 480,
"end": 508,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Experiments",
"sec_num": "4"
},
{
"text": "to Solve Our Task Design. Using our default dataset containing one explanation per training point, we measure model accuracy with retrieval in a 3 \u00d7 2 design. There are three conditions for the retrieval model: (1) fixed, where the Sentence-RoBERTa retriever is fixed and only the classifier is trained, (2) learned, where both classifier and retriever are trained end- of tasks increases (equivalent to the number of points per task decreasing), reaching accuracies as low as 62.2% at num-tasks= 500. Meanwhile, we observe that providing the index does slightly ease the task inference, but the models can by no means memorize the map from index to the task information. Regarding model capacity, we find that using RoBERTa-large increases model accuracy when the number of num-tasks is relatively low (less than 250), but after this point RoBERTa-base performs better (see Fig. 13 in Appendix B). Lastly, we see that increasing the training set size can greatly improve model performance even with num-tasks= 500, reaching 87.11% with 50,000 training points (trend shown in Fig. 14 in Appendix B). However, we will see in the next section that, in terms of improving model accuracy, even this 10x increase in training size is less effective than using retrieved explanations with 5000 training points.",
"cite_spans": [],
"ref_spans": [
{
"start": 875,
"end": 882,
"text": "Fig. 13",
"ref_id": "FIGREF0"
},
{
"start": 1076,
"end": 1080,
"text": "Fig.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explanation Retrieval Enables a Model",
"sec_num": "4.1"
},
{
"text": "[ roberta-large results in appendix. better at low-task regime, worse in high-task regime -PH ] 6.2. RQ2: Can retrieval of past explanations enable a model to solve our task?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Retrieval Enables a Model",
"sec_num": "4.1"
},
{
"text": "Design. Using the full-info explanations and data with num-tasks= 500, we measure model accuracy with retrieval in a 3 \u21e5 2 design. There are three conditions for the retrieval model: (1) fixed, where the Sentence-RoBERTa retriever is fixed and only the classifier is trained, (2) learned, where both classifier and retriever are trained end-to-end, and (3) optimal where the optimal retrieval model is used and the classifier is trained. Note that we know the optimal retrieval model assigns the highest probabilities to explanations with index e matching the query point's index x , so by using a retriever p(e i |x i ) = exp ( [index e = index x ]) and a context size lower than n task , we can ensure the retrieved explanations are always relevant. There are two conditions for the conditioning mechanism used: (1) TEXTCAT with C=k=6, and (2) H-MEAN with C=4 and k=4, which approximately matches the computational cost of the TEXTCAT condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Retrieval Enables a Model",
"sec_num": "4.1"
},
{
"text": "Results. Shown in Fig. 6 , the results show that retrieval with Sentence-BERT improves model accuracy by around 29 percentage points over a no-retrieval baseline. Each conditioning mechanism sees roughly the same improvement. Additionally, we can learn a retrieval model that does nearly as well as the optimal retrieval model, improving over the fixed condition by another 7 points.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 24,
"text": "Fig. 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Explanation Retrieval Enables a Model",
"sec_num": "4.1"
},
{
"text": "Thus, retrieval of explanations allows the model to perform much better than a no-retrieval baseline. We see a large improvement in performance from retrieval even when the baseline could learn to infer the task information directly from the index value in each input. In fact, explanation retrieval outperforms a no-retrieval baseline with as many as 50,000 training data points (a 10x increase), which obtains 87.11% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Retrieval Enables a Model",
"sec_num": "4.1"
},
{
"text": "Design. We run the same experiment design as for RQ2, using evidential and recomposable explanations (see Sec. 3.3). With evidential explanations, we shift each integer in the explanation (excluding the index) independently by zero-mean, discrete noise \u270f \u21e0 unif( 2, 2). We use the 2-piece condition for recomposable explanations, meaning two explanations combine to give the full task information. As in RQ1, we show results here for values of C=k=6 for TEXTCAT and C=k=4 for H-MEAN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ3: Can models aggregate information across explanations for better prediction?",
"sec_num": "6.3."
},
{
"text": "Results. We display the results in Fig. 7 . First, we observe that for evidential explanations, learned retrieval is close to-end, and (3) optimal where the optimal retrieval model is used and the classifier is trained. We know the optimal retrieval model retrieves explanations with an index matching the query point's index.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 41,
"text": "Fig. 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "RQ3: Can models aggregate information across explanations for better prediction?",
"sec_num": "6.3."
},
{
"text": "The two conditioning mechanisms, H-MEAN and TEXTCAT, differ in how they combine information across multiple retrieved explanations to produce a final prediction (see Appendix B.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ3: Can models aggregate information across explanations for better prediction?",
"sec_num": "6.3."
},
{
"text": "Results. The results in Fig. 4 show that explanation retrieval can reach accuracies above 98%, improving accuracy by around 37 points over a no-explanation baseline. We also find that the learned retrieval model does as well as the optimal retrieval model, improving over the fixed condition by about 7 points. Thus, access to explanations allows the model to perform much better than a no-explanation baseline. In fact, the explanation retrieval model outperforms a no-explanation baseline with as many as 50,000 training data points (a 10x increase), which obtains 87.11% accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 30,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "RQ3: Can models aggregate information across explanations for better prediction?",
"sec_num": "6.3."
},
{
"text": "Design. We measure test accuracy as a function of how many unique explanations (and therefore label reasons) there are in the data. While keeping the train set size fixed at 5000 points, we vary how many points share the same explanation (index, m, n, r, d). By default there are 10 points per index, and with 5000 points this means that there are 500 unique explanations in the data. We use many as 2500 points per index, meaning using two unique explanations. The experiment conditions also vary in how task information is available in the input: (1) for With Explanation, each 20-integer sequence x i has its explanation appended to it; (2) for No Explanation, only x i is given, which requires the model to learn the map index \u2192 (m, n, r, d);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Is The Task Hard Without Explanations?",
"sec_num": "4.2"
},
{
"text": "(3) for No Index, the index is omitted from the input, so the model must infer the label reason from the sequence's contents alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Is The Task Hard Without Explanations?",
"sec_num": "4.2"
},
{
"text": "Results. The results are shown in Fig. 5 . We see that, when the number of unique explanations (and therefore possible label reasons) is small, the No Explanation model can achieve an accuracy as high as if it had been directly given the label reason, i.e. as high as the With Explanation condition. Yet, No Explanation model accuracy falls off quickly with the number of unique explanations, reaching accuracies as low as 62.2% with 500 explanations. Evidently, with this many unique explanations, it is too difficult to learn the map between the index and the latent label reason. Without the index in the input (No Index condition), it is even harder to infer the label reason. While accuracy does rise significantly with the size of the training data (see Fig. 4 ), even using 10x as much train data does not close the gap with the explanation retrieval model.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 40,
"text": "Fig. 5",
"ref_id": "FIGREF5"
},
{
"start": 760,
"end": 766,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Why Is The Task Hard Without Explanations?",
"sec_num": "4.2"
},
{
"text": "We present a synthetic dataset with key similarities to natural language explanation data, and we show that our explanations are highly useful for model learning. However, we emphasize that if a model already \"knew\" the information in some explanations, it might not need them. This may plausibly occur with sufficiently large pretrained models that store a great deal of factual knowledge (Petroni et al., 2019) . Similarly, the necessary information might be learnable from (X, Y ) data alone. Future work on modeling approaches we outline in this paper (Fig. 2) will benefit from testing their methods on controlled synthetic tasks as a test of their ability to learn from explanation data. Then, further analysis will be helpful for understanding how explanations contain novel information that is not learned elsewhere in pretraining or finetuning.",
"cite_spans": [
{
"start": 390,
"end": 412,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 556,
"end": 564,
"text": "(Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion & Conclusion",
"sec_num": "5"
},
{
"text": "We ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "There are several positive broader impacts from designing methods for learning from human explanations. Foremost among them is the promise of better aligning learned models with human priors on what kinds of behaviors are good, which could be especially helpful when these priors are hard to robustly encode in supervised learning objectives or unlikely to be learned from the available data. Explanations can also greatly improve model sample efficiency, which is broadly beneficial for difficult, time-consuming, or human-in-the-loop tasks where acquiring a large amount of data is expensive and slow. There are still some possible risks to this methodology, mainly involving overconfidence in what explanations can provide. For instance, just because explanations improve a model's performance does not mean the model will behave exactly as a human would. We risk anthropomorphizing machine learning models when we suppose their learned interpretations of explanations matches our own. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": null
},
{
"text": "We give additional experimental results with our synthetic dataset in an extended technical report on this topic, available here: https://arxiv. org/abs/2102.02201. Additional experiments are conducted to answer a research questions including: Here, we introduce our chosen model for incorporating explanation data, which makes use of explanations as model inputs after they are retrieved from the training data (the \"Retrieval\" graphical model in Fig. 2 ). Our approach is similar to Lewis et al. (2020), who marginalize over latent documents retrieved from Wikipedia for question answering, question generation, and fact verification. The marginal distribution is given as:",
"cite_spans": [],
"ref_spans": [
{
"start": 448,
"end": 454,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "p \u0398 (y|x) = e\u2208top-k(p\u03b7(\u2022|x)) p \u03b8 (y|x, e)p \u03b7 (e|x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "where top-k gets the top k texts as ranked by the retrieval model, p \u03b7 (e|x). Note that we never retrieve a data point's own explanation when predicting its label. We do so because explanations can leak the label (Hase et al., 2020) and this approach matches the test-time distribution, where we assume explanations are not collected for new data points (see discussion in Sec. 2). Zhou et al. (2020) also propose to use explanations as latent variables and retrieve explanations at inference time, but they do not learn the retrieval model, marginalize over the latents during inference, or prohibit data point's own explanations from being retrieved. In our experiments, we compare with their original approach and a version where we marginalize over the latents and learn the retrieval model.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "(Hase et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
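{
"text": "As a minimal sketch of this marginalization in Python (assuming a classifier function that returns logits for a given input-explanation pair and precomputed retrieval scores; the names are ours, not the original implementation, and a point's own explanation is assumed to be excluded from the candidates):\n\nimport torch\nimport torch.nn.functional as F\n\ndef marginal_predict(x, candidates, classifier_logits, retrieval_scores, k=6):\n    # retrieval_scores[i] is the unnormalized score f_eta(e_i)^T f_eta(x)\n    topk = torch.topk(retrieval_scores, k)\n    p_e = F.softmax(topk.values, dim=0)              # p_eta(e|x) over the top-k\n    p_y = torch.stack([F.softmax(classifier_logits(x, candidates[i]), dim=-1)\n                       for i in topk.indices.tolist()])  # p_theta(y|x, e) per explanation\n    return (p_e.unsqueeze(-1) * p_y).sum(dim=0)      # p_Theta(y|x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},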
{
"text": "The form of p \u03b7 (e|x) follows Lewis et al. (2020) and . Given a query x, unnormalized probabilities are computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "p \u03b7 (e|x) \u221d exp (f \u03b7 (e) T f \u03b7 (x))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "where f \u03b7 embeds each sequence into a vector. To compute top-k(p \u03b7 (\u2022|x)), we search through the training explanations using FAISS (Johnson et al., 2017) . We discuss methods for computing p \u03b8 (y|x, e) and f \u03b7 (e|x) in Sec. B.1. Because it may be helpful to reason over multiple explanations at once, we extend this model to allow for explanations to be composed into a single \"document.\" Assuming explanations to be conditionally independent given x, we can compute the probability of a set of explanations E = {e c } C c=1 as",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "p(E|x) \u221d exp ( e\u2208E f \u03b7 (e) T f \u03b7 (x)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "where (1) a context size C will control the size of the explanation set, (2) a value of k implies that the top Ck will be retrieved, and (3) we sort these Ck explanations into sets in order of their probability p \u03b7 (e|x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "model inputs, with explanations each Marginalize over Compute classifier Retrieval given Figure 6 : A depiction of our retrieval-based method TEXTCAT. A total of Ck explanations are retrieved and allocated into k latent variables, each a set of explanations E, which are marginalized over to produce a final prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "We represent the overall approach in Fig. 6 for one method of computing p \u03b8 (y|x, E) (described fully in Sec. B.1), where explanations are concatenated with the query sequence. Flowing from left to right, Fig. 6 shows how explanations are retrieved from the training data conditioned on a query sequence x, then allocated into k classifier inputs with C explanations each. The k classifier predictions are aggregated by marginalizing over the latent variable, Z = E.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 43,
"text": "Fig. 6",
"ref_id": "FIGREF2"
},
{
"start": 205,
"end": 211,
"text": "Fig. 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "Modeling Assumptions. In using retrieval, we make a few assumptions. First, since the number of forward passes per data point scales with k, we require a relatively small value of k, i.e. k \u2264 10, for reasonable computational efficiency in SGDbased training. Hence, we must assume that this summation is sufficiently similar to the full summation over latent variables. This assumption is more likely to hold when (1) a small number of documents account for most of the probability mass in p \u03b7 (e|x), and (2) a pretrained model p \u03b7 (e|x) yields a decent initial rank-ordering, such that some of the best documents are in the top-k. The exact value of k we use depends on the experiment. A second, more basic assumption is that explanations will be useful in predicting other data points' labels. Such an assumption is needed since we never condition on a data point's own explanation. Lastly, during retrieval we assume that explanations are independent given x, i.e. p(E|x) = e\u2208E p(e|x). This could be a poor assumption when, for instance, explanations each contribute one of a number of needed facts, in which case it would be helpful to retrieve additional explanations conditioned on what has already been retrieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Experiments",
"sec_num": null
},
{
"text": "In this section we describe the methods used to compute p \u03b8 (y|x, E) and p \u03b7 (e|x) (see Sec. B for the overall model description). For the classifier p \u03b8 (y|x, E), we use two methods, TEXTCAT and H-MEAN, which are described below. Then we describe the retrieval model, which is based on Sentence-BERT (Reimers and Gurevych, 2019) .",
"cite_spans": [
{
"start": 301,
"end": 329,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
{
"text": "TEXTCAT. Represented in Figure 6 , this method takes a straightforward approach to conditioning on a set of explanations: concatenating C explanations and the input x to form a longer sequence of text. Each of the original sequences is separated by a special token, e.g.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 32,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
{
"text": "[SEP] for BERT. In our experiments, we pass this longer sequence into a RoBERTa-base model. After pooling the output token representations, we pass the resulting vector to a 1-layer MLP for classification. We use mean pooling for our synthetic task and NLI; for relation extraction tasks, we concatenate the representations corresponding to the initial tokens in the subject and object words, since this is an especially effective pooling technique (Baldini Soares et al., 2019) .",
"cite_spans": [
{
"start": 449,
"end": 478,
"text": "(Baldini Soares et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
{
"text": "This approach allows the model to reason over all of the explanations and the input together. While the method may be limited by the fact that some models can face difficulties in processing long pieces of text (Beltagy et al., 2020) , this issue is partly mitigated by marginalizing over k sets of explanations. As a result of the marginalization, the final prediction can be conditioned on a far higher number (Ck) of individual explanations than could fit in the context alone. ",
"cite_spans": [
{
"start": 211,
"end": 233,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
{
"text": "h = 1 C C c=1 RoBERTa(x )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
{
"text": "which is then passed to the MLP for classification. H-MEAN does not face the same sequence length limitations as TEXTCAT, but by separately processing of each explanations H-MEAN may fail to integrate information across explanations. This method also becomes expensive when we marginalize over E (which is what allows retrieval to be learned), as it requires Ck forward passes for a single prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},
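{
"text": "The two mechanisms can be sketched as follows with a Hugging Face-style encoder; this is an illustrative reconstruction (tokenization details, pooling, and the MLP head are assumptions), not the exact implementation.\n\nimport torch\n\ndef textcat_logits(encoder, tokenizer, mlp, x_text, explanations):\n    # TEXTCAT: concatenate the input and C explanations into one sequence\n    joined = tokenizer.sep_token.join([x_text] + explanations)\n    enc = tokenizer(joined, return_tensors='pt', truncation=True)\n    h = encoder(**enc).last_hidden_state.mean(dim=1)   # mean pooling\n    return mlp(h)\n\ndef hmean_logits(encoder, tokenizer, mlp, x_text, explanations):\n    # H-MEAN: encode each (input, explanation) pair separately, then average\n    hs = [encoder(**tokenizer(x_text, e, return_tensors='pt', truncation=True)\n                  ).last_hidden_state.mean(dim=1) for e in explanations]\n    return mlp(torch.stack(hs).mean(dim=0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Conditioning Mechanisms",
"sec_num": null
},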
{
"text": "We use a similar approach to retrieval as in Lewis et al. (2020), namely using vector representations of sequences from a pretrained transformer to compute p \u03b7 (e|x) \u221d exp (f \u03b7 (e) T f \u03b7 (x)), which is followed by computing top-Ck(p \u03b7 (\u2022|x). We use an approximate but sub-linear time search method (FAISS) to find the top-Ck points (Johnson et al., 2017) . In our experiments we find that it is necessary to use Sentence-BERT (Reimers and Gurevych, 2019) as our pretrained f \u03b7 , rather than simply a pretrained RoBERTa model. Sentence-BERT is a network trained to produce semantic representations of sentences that can be compared under cosine similarity. In our experiments, we use the Sentence-RoBERTa-base model trained on a combination of several NLI and semantic textual similarity tasks, with mean pooling of token representations. We normalize the representations we obtain from this model, so that our inner product is equivalent to a cosine similarity.",
"cite_spans": [
{
"start": 332,
"end": 354,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 426,
"end": 454,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Retrieval",
"sec_num": null
},
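{
"text": "A minimal sketch of this retrieval step using the sentence-transformers and FAISS libraries; the checkpoint name below is illustrative rather than the exact model used.\n\nimport faiss\nfrom sentence_transformers import SentenceTransformer\n\nretriever = SentenceTransformer('sentence-transformers/nli-roberta-base-v2')  # assumed checkpoint\n\ndef build_index(train_explanations):\n    emb = retriever.encode(train_explanations, convert_to_numpy=True).astype('float32')\n    faiss.normalize_L2(emb)                  # cosine similarity via inner product\n    index = faiss.IndexFlatIP(emb.shape[1])\n    index.add(emb)\n    return index\n\ndef retrieve(index, query_text, ck=24):\n    q = retriever.encode([query_text], convert_to_numpy=True).astype('float32')\n    faiss.normalize_L2(q)\n    scores, ids = index.search(q, ck)        # top-Ck explanations by f_eta(e)^T f_eta(x)\n    return scores[0], ids[0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Retrieval",
"sec_num": null
},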
{
"text": "Note that during training, we never condition on a data point's own explanation when predicting its label. This is an important constraint for matching the train and test-time distributions. At test time, we assume we have access only to past (training) explanations, since they can be expensive to collect and conditioning on explanations at test time can lead to label leakage, meaning what is essentially the benefit of human labeling could be mistaken as improvements in model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Retrieval",
"sec_num": null
},
{
"text": "C Training Details C.1 Runtimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Retrieval",
"sec_num": null
},
{
"text": "Regarding training times, we run most experiments on a single NVIDIA RTX 2080 GPU, with run-times as follows: 4.0 hours for 40 epochs of the noretrieval RoBERTa-base using the synthetic dataset; 5.7 hours for 40 epochs of RoBERTa-large in the same setting; 8.6 hours for 20 epochs of learned retrieval with RoBERTa-base models on synthetic data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Retrieval",
"sec_num": null
},
{
"text": "For optimization, we use AdamW with a learning rate of 1e\u22125 and gradient norm clipping at norm 1. For the LR, we use a linear warmup and decay schedule peaking at 10% of the training steps for experiments with synthetic data and at 1% for experiments with existing datasets (given the larger training set sizes). The batch size is set to 10 across all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},
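{
"text": "A sketch of the stated optimization setup (AdamW at 1e-5, linear warmup to 10% of the steps followed by linear decay, and gradient clipping at norm 1); the helper names follow the transformers and PyTorch APIs, and the step count is a placeholder.\n\nimport torch\nfrom torch.optim import AdamW\nfrom transformers import get_linear_schedule_with_warmup\n\ndef configure_optimization(model, num_training_steps, warmup_frac=0.10, lr=1e-5):\n    optimizer = AdamW(model.parameters(), lr=lr)\n    scheduler = get_linear_schedule_with_warmup(\n        optimizer,\n        num_warmup_steps=int(warmup_frac * num_training_steps),\n        num_training_steps=num_training_steps)\n    return optimizer, scheduler\n\n# per step: loss.backward()\n#           torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)\n#           optimizer.step(); scheduler.step(); optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},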
{
"text": "We decide how often to rebuild the representations of training explanations while learning the retrieval model by tuning across frequency values in the range {10%, 20%, 33%, 50%, 100%} (i.e. to rebuild at this percentage of every epoch), as well as never rebuilding. In our synthetic setting, the only noticeable drop in performance comes from never rebuilding. As long as representations are re-encoded at least as often as every epoch, we notice no difference in final test accuracy, though in early experiments we observed that rebuilding more often improved training stability. To err on the safe side of training stability, we re-encode the representations every 20% of each epoch in all experiments except e-SNLI with full data, where we re-encode every 30% of each epoch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},
{
"text": "Additionally, we use the stop-gradient function when computing the gradient of p \u03b7 (e|x) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},
{
"text": "\u2207 \u03b7 exp (sg[f \u03b7 (e)] T f \u03b7 (x)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},
{
"text": "meaning that we do not differentiate through the explanation embeddings, but only through the query data point embeddings. In early experiments, we found that this decision contributed to training stability, while improving computational efficiency, and we confirm that we observe no differences in model accuracy as a result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},
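{
"text": "In PyTorch terms, the stop-gradient above corresponds to detaching the explanation embeddings when computing retrieval scores; a minimal sketch (the function names are ours):\n\ndef retrieval_scores(f_eta, x_batch, e_batch):\n    q = f_eta(x_batch)            # query embeddings: gradients flow\n    e = f_eta(e_batch).detach()   # explanation embeddings: treated as constants (sg[.])\n    return (q * e).sum(dim=-1)    # unnormalized scores f_eta(e)^T f_eta(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Training Hyperparameters and Analysis",
"sec_num": null
},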
{
"text": "We compute confidence intervals for our synthetic data tasks to represent seed variance around some mean seed performance. We represent seed variance in figures rather than sample variance because the sample variance is fairly low with 50,000 test points and could be driven arbitrarily low with more generated test points. For instance, the 95% confidence interval for a model accuracy of 90% would be \u00b10.26. To calculate seed variance, we run 10 random seeds for our baseline condition (no-retrieval) with the default synthetic task setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.3 Experiment Confidence Intervals",
"sec_num": null
},
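{
"text": "As a quick check of the quoted interval, under a normal approximation the 95% half-width for 90% accuracy on 50,000 test points is about 0.26 points:\n\nfrom math import sqrt\n\np, n = 0.90, 50_000\nhalf_width = 1.96 * sqrt(p * (1 - p) / n) * 100   # in accuracy points\nprint(round(half_width, 2))                        # 0.26",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.3 Experiment Confidence Intervals",
"sec_num": null
},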
{
"text": "The required parameters to the data generation include: (1) a training sample size sample-size and (2) num-tasks, the number of unique integer pairs to be counted, or, equivalently, the number of points per index, n task . In all experiments, we use a maximum integer value of 100 to appear in the sequences, and a maximum index value of 10,000. We give the general generative process below. Note that the dev and test sets are constructed with the extra constraint that sequences must not appear in the training data. Further note that this is the generic version of generative process, and in some experiments the process is altered. For example, in RQ3, indicator is always 1 and the construction of the map from index values to (m, n) tuples occurs in a special way described in the experimental design for RQ3. 4. Compute the number of points per index, n task = sample-size // num-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
},
{
"text": "5. For each index \u2208 {index t } num-tasks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
},
{
"text": "\u03c4 =1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
},
{
"text": ": (a) Sample a vector of length n task , balanced between 1s and 2s, that gives the values of {indicator p } P p=1 for the P points with that index. (b) Sample a vector of length n task , balanced between 0s and 1s, representing whether the features 1[#m>#n] and 1 [#r>#d] should correlate (1 implies they are equal, and 0 unequal). This balance changes when the strong-weak correlation is intended to change.",
"cite_spans": [
{
"start": 265,
"end": 272,
"text": "[#r>#d]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
},
{
"text": "(c) Sample a vector of length n task , balanced between 0s and 1s, representing whether (m, n) or (r, d) should be the more numerous integers in the sequence (so that there is no bias, even randomly, between features by size). (d) For i \u2208 1 : n task :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
},
{
"text": "i. Place the index in the first element of an empty array, and the indicator in the second. ii. Based on the i th elements of the three vectors described above, allocate samples of the integers in (m, n, r, d) index into the remaining 18 slots. iii. If there are any remaining slots after these integers are randomly allocated, fill them with i.i.d. samples from unif (1, 100).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
}
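,
{
"text": "A simplified Python sketch of step 5 for one index, complementing the single-point sketch in Section 3 (the names are ours): it balances indicator values and labels across the n_task points, but it omits the strong/weak-feature correlation control in step 5(b) and does not guard against the uniform filler changing the relevant counts.\n\nimport random\n\ndef points_for_index(index, m, n, r, d, n_task=10, seq_len=20, max_int=100):\n    indicators = [1, 2] * (n_task // 2)\n    labels = [0, 1] * (n_task // 2)\n    random.shuffle(indicators); random.shuffle(labels)\n    points = []\n    for indicator, label in zip(indicators, labels):\n        a, b = (m, n) if indicator == 1 else (r, d)   # which pair is causal\n        hi, lo = (a, b) if label == 1 else (b, a)     # which of the pair is more numerous\n        n_hi = random.randint(2, 5); n_lo = random.randint(1, n_hi - 1)\n        body = [hi] * n_hi + [lo] * n_lo\n        body += [random.randint(1, max_int) for _ in range(seq_len - 2 - len(body))]\n        random.shuffle(body)\n        points.append(([index, indicator] + body, label))\n    return points",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Synthetic Task Generative Process",
"sec_num": null
}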
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning with latent language",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1197"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In NAACL-HLT 2018.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning from rules generalizing labeled exemplars",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Awasthi",
"suffix": ""
},
{
"first": "Sabyasachi",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Rasna",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijeet Awasthi, Sabyasachi Ghosh, Rasna Goyal, and Sunita Sarawagi. 2020. Learning from rules generalizing labeled exemplars. In ICLR 2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Predicting deep zero-shot convolutional neural networks using textual descriptions",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "4247--4255",
"other_ids": {
"DOI": [
"10.1109/ICCV.2015.483"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Jimmy Ba, Kevin Swersky, Sanja Fidler, and Rus- lan Salakhutdinov. 2015. Predicting deep zero-shot convolutional neural networks using textual descrip- tions. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, De- cember 7-13, 2015, pages 4247-4255. IEEE Com- puter Society.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Livio Baldini",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "2895--2905",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1279"
]
},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. In ACL, pages 2895-2905, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deriving machine attention from human rationales",
"authors": [
{
"first": "Yujia",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1903--1913",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1216"
]
},
"num": null,
"urls": [],
"raw_text": "Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human ratio- nales. In EMNLP, pages 1903-1913, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "NeurIPS",
"authors": [
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
},
{
"first": "Sandhini",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Herbert-Voss",
"suffix": ""
},
{
"first": "Gretchen",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Ziegler",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Clemens",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hesse",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Sigler",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Litwin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learn- ers. In NeurIPS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "e-snli: Natural language inference with natural language explanations",
"authors": [
{
"first": "Oana-Maria",
"middle": [],
"last": "Camburu",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lukasiewicz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2018,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Nat- ural language inference with natural language expla- nations. In NeurIPS 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Guiding policies with language via meta-learning",
"authors": [
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Co-Reyes",
"suffix": ""
},
{
"first": "Suvansh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Sanjeev",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Altieri",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, and Sergey Levine. 2019. Guiding policies with language via meta-learning. In ICLR 2019.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training classifiers with natural language explanations",
"authors": [
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Paroma",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Bringmann",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R\u00e9. 2018. Training classifiers with natural language explanations. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language?",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Hase",
"suffix": ""
},
{
"first": "Shiyue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-adjusted simulatability: Can models generate non-trivial explanations of their be- havior in natural language? In Findings of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Billion-scale similarity search with gpus",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Transactions on Big Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with gpus. IEEE Transactions on Big Data.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP, pages 6769-6781, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Nile : Natural language inference with faithful natural language explanations",
"authors": [
{
"first": "Sawan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sawan Kumar and Partha Talukdar. 2020. Nile : Natu- ral language inference with faithful natural language explanations. In ACL 2020.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Can language models learn from explanations in context?",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hill",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2204.02329"
]
},
"num": null,
"urls": [],
"raw_text": "Wang, and Felix Hill. 2022. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Retrieval-augmented generation for knowledge-intensive NLP tasks",
"authors": [
{
"first": "S",
"middle": [
"H"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik- tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ALICE: active learning with contrastive natural language explanations",
"authors": [
{
"first": "Weixin",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4380--4391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weixin Liang, James Zou, and Zhou Yu. 2020. ALICE: active learning with contrastive natural language ex- planations. In EMNLP, pages 4380-4391. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Can small and synthetic benchmarks drive modeling innovation? a retrospective study of question answering modeling approaches",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Tony Lee, Robin Jia, and Percy Liang. 2021. Can small and synthetic benchmarks drive modeling innovation? a retrospective study of ques- tion answering modeling approaches. CoRR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Predicting inductive biases of pretrained models",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Lovering",
"suffix": ""
},
{
"first": "Rohan",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Lovering, Rohan Jha, Tal Linzen, and Ellie Pavlick. 2021. Predicting inductive biases of pre- trained models.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Explanation in artificial intelligence: Insights from the social sciences",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Artif. Intell",
"volume": "267",
"issue": "",
"pages": "1--38",
"other_ids": {
"DOI": [
"10.1016/j.artint.2018.07.007"
]
},
"num": null,
"urls": [],
"raw_text": "Tim Miller. 2019. Explanation in artificial intelli- gence: Insights from the social sciences. Artif. In- tell., 267:1-38.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Expbert: Representation engineering with natural language explanations",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Pang",
"middle": [],
"last": "Wei Koh",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "2106--2113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. Expbert: Representation engineering with natural language explanations. In ACL, pages 2106-2113. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Noah Fiedel, and Karishma Malkan. 2020. WT5?! training text-to-text models",
"authors": [
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Katherine",
"middle": [
"J"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharan Narang, Colin Raffel, Katherine J. Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020. WT5?! training text-to-text models to explain their predictions. ArXiv, abs/2004.14546.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluating explanations: How much do explanations from the teacher aid students? TACL",
"authors": [
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Livio Baldini",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neu- big, and William W. Cohen. 2021. Evaluating expla- nations: How much do explanations from the teacher aid students? TACL, abs/2012.00893.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense rea- soning. In ACL 2019.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In EMNLP-IJCNLP, pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Right for the right reasons: Training differentiable models by constraining their explanations",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Slavin Ross",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Hughes",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2662--2670",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/371"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In IJCAI, pages 2662-2670.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Guide me: Interacting with deep networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Rupprecht",
"suffix": ""
},
{
"first": "Iro",
"middle": [],
"last": "Laina",
"suffix": ""
},
{
"first": "Nassir",
"middle": [],
"last": "Navab",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"D"
],
"last": "Harger",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Tombari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Rupprecht, Iro Laina, Nassir Navab, Gre- gory D. Harger, and Federico Tombari. 2018. Guide me: Interacting with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2018.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Taking a HINT: leveraging explanations to make vision and language models more grounded",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Ramprasaath Ramasamy Selvaraju",
"suffix": ""
},
{
"first": "Yilin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Shalini",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Larry",
"middle": [
"P"
],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2019,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "2591--2600",
"other_ids": {
"DOI": [
"10.1109/ICCV.2019.00268"
]
},
"num": null,
"urls": [],
"raw_text": "Ramprasaath Ramasamy Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry P. Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a HINT: leveraging explanations to make vision and language models more grounded. In ICCV, pages 2591-2600. IEEE.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The constrained weight space svm: learning with ranked features",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Small",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Byron",
"suffix": ""
},
{
"first": "Carla",
"middle": [
"E"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"A"
],
"last": "Brodley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Trikalinos",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "865--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Small, Byron C Wallace, Carla E Brodley, and Thomas A Trikalinos. 2011. The constrained weight space svm: learning with ranked features. In ICML, pages 865-872.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning classifiers from declarative language",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, I. Labutov, and T. Mitchell. 2017. Learning classifiers from declarative language. In NeurIPS 2017.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Zero-shot learning of classifiers from natural language quantification",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1029"
]
},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natural language quantification. In ACL 2018.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Supervising model attention with human explanations for robust natural language inference",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Stacey",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2022,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Stacey, Yonatan Belinkov, and Marek Rei. 2022. Supervising model attention with human explana- tions for robust natural language inference. In AAAI.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Right for the right concept: Revising neuro-symbolic concepts by interacting with their explanations. CoRR, abs",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Stammer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Schramowski",
"suffix": ""
},
{
"first": "Kristian",
"middle": [],
"last": "Kersting",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. 2020. Right for the right concept: Re- vising neuro-symbolic concepts by interacting with their explanations. CoRR, abs/2011.12854.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Leap-of-thought: Teaching pre-trained models to systematically reason over implicit knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Gold- berg, and Jonathan Berant. 2020. Leap-of-thought: Teaching pre-trained models to systematically rea- son over implicit knowledge. In NeurIPS 2020.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Does it make sense? and why? a pilot study for sense making and explanation",
"authors": [
{
"first": "Cunxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaonan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tian",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiao- nan Li, and Tian Gao. 2019a. Does it make sense? and why? a pilot study for sense making and expla- nation. In ACL 2019.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning from explanations with neural execution tree",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yujia",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wenxuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Qinyuan",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xi- ang Ren. 2019b. Learning from explanations with neural execution tree. In ICLR.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Measuring association between labels and free-text rationales",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovic",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith. 2020. Measuring association between labels and free-text rationales. CoRR, abs/2010.12762.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "An example of our synthetic task."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "For every data point x, there is an explanation e = (index, m, n, r, d) where the label reason is given by either (m, n) or (r, d). Whether the label reason is the (m, n) integer pair or the (r, d) pair is dictated by the indicator. As represented in Fig. 3, (a, b) = (m, n) if the indicator is 1 and (a, b) = (r, d) if the indicator is 2."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(RQ2) Synthetic task accuracy by the conditioning mechanism and retrieval model status, for data with num-tasks = 500.[ new 10x train baseline -PH ]"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(RQ3) Synthetic task accuracy with evidential and recomposable explanations, grouped by the conditioning mechanism and status of retrieval model. [ shud we be mentioning the error bars once somewhere in caption/main text?[ added model selection and hypothesis testing section in Experimental Setup -PH ] -MB ]"
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Synthetic task accuracy for our baseline and retrieval model with two conditioning mechanisms, H-MEAN and TEXTCAT."
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Synthetic task accuracy as a function of the number of unique explanations for data point labels."
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "-MEAN. By H-MEAN, we refer to the kind of unweighted hidden representation averaging used in Co-Reyes et al. (2019) and Zhou et al. (2020). H-MEAN works by first obtaining representations of the input x and a single explanation e at a time, then passing the unweighted average of these representations to an MLP. For a fair comparison with TEXTCAT, we use the same token pooling and a 1-layer MLP. So with C explanations to condition on, x = concatenate(x, e), and vector representations from RoBERTa(x ), H-MEAN obtains a sin-gle representation as"
},
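As a rough illustration of H-MEAN (a sketch under assumptions, not the paper's exact implementation): each concatenation of x with one explanation e_c is encoded by RoBERTa, token representations are mean-pooled, the C pooled vectors are averaged without weights, and the result is passed to a small MLP head. The HMean class name, the separator string, and the head sizes below are illustrative.

```python
# Sketch of H-MEAN conditioning (illustrative; pooling and head details are assumptions).
import torch
from torch import nn
from transformers import RobertaModel, RobertaTokenizerFast


class HMean(nn.Module):
    def __init__(self, num_labels: int = 2, hidden: int = 768):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, num_labels))  # 1-layer MLP head

    def forward(self, input_ids, attention_mask):
        # input_ids: (C, T), one row per concatenated (x, e_c) sequence.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # mean token pooling
        h = pooled.mean(dim=0, keepdim=True)                          # unweighted average over C
        return self.head(h)                                           # logits for data point x


tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
x = "4981 1 62 17 62 5 ..."                                           # synthetic-task input (illustrative)
explanations = ["index 4981: 62 17 35 88", "index 2210: 9 41 73 12"]  # retrieved e's (illustrative)
enc = tokenizer([x + " </s> " + e for e in explanations],
                padding=True, truncation=True, return_tensors="pt")
model = HMean()
with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"])
```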
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Sample {index t } num-tasks \u03c4 =1from the uniform distribution over integers {1,...,10000} without replacement.2. Sample{(m, n, r, d) t } num-tasks \u03c4 =1from the uniform distribution over integers, unif ([1, 100] 4 ), without replacement and requiring that m = n = r = d.3. Define the set {(index, m, n, r, d) index )} for index and (m, n, r, d) drawn from their respective sets, without replacement, in an arbitrary order."
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "Using Explanations as Targets. Explanations are often used as additional supervision (shown",
"content": "<table><tr><td>Explanation as Target</td><td>Explanation as Prior</td><td/></tr><tr><td>Multi-Task</td><td>Data Augmentation</td><td>Regularizer or Hypernetwork</td></tr><tr><td>Explanation as Input</td><td/><td/></tr><tr><td>Global Set</td><td>Retrieval</td><td>Per Data Point Input</td></tr><tr><td>Structured Variable</td><td>Per Label Structured Variable</td><td>Few-shot In-context Learning</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "thank Miles Turpin and Ethan Perez for helpful discussion of the topics represented here, as well as Xiang Zhou, Prateek Yadav, and our anonymous reviewers for helpful feedback on the work. This work was supported by NSF-CAREER",
"content": "<table><tr><td>Award 1846185, DARPA Machine-Commonsense</td></tr><tr><td>(MCS) Grant N66001-19-2-4031, a Google PhD</td></tr><tr><td>Fellowship, Microsoft Investigator Fellowship, and</td></tr><tr><td>Google and AWS cloud compute awards. The</td></tr><tr><td>views contained in this article are those of the au-</td></tr><tr><td>thors and not of the funding agency.</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "OmarZaidan, Jason Eisner, and Christine Piatko. 2007. Using \"Annotator Rationales\" to Improve Machine Learning for Text Categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267, Rochester, New York. Association for Computational Linguistics.",
"content": "<table><tr><td>Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-Augmented Convolutional Neural Networks for Text Classification. In Proceedings of the 2016 Conference on Empirical Methods in Nat-ural Language Processing, pages 795-804, Austin, Texas. Association for Computational Linguistics.</td></tr><tr><td>Xinyan Zhao and VG Vydiswaran. 2021. Lirex: Aug-menting language inference with relevant explana-tion. In AAAI.</td></tr><tr><td>Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiao-dan Liang, Maosong Sun, Chenyan Xiong, and Jian Tang. 2020. Towards interpretable natural language understanding with explanations as latent variables. In NeurIPS.</td></tr></table>",
"num": null
}
}
}
}