{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:02.016599Z"
},
"title": "Structured Self-Attention Weights Encode Semantics in Sentiment Analysis",
"authors": [
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
},
{
"first": "Thanh-Son",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Desmond",
"middle": [
"C"
],
"last": "Ong",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural attention, especially the self-attention made popular by the Transformer, has become the workhorse of state-of-the-art natural language processing (NLP) models. Very recent work suggests that the self-attention in the Transformer encodes syntactic information; Here, we show that self-attention scores encode semantics by considering sentiment analysis tasks. In contrast to gradient-based feature attribution methods, we propose a simple and effective Layer-wise Attention Tracing (LAT) method to analyze structured attention weights. We apply our method to Transformer models trained on two tasks that have surface dissimilarities, but share common semanticssentiment analysis of movie reviews and timeseries valence prediction in life story narratives. Across both tasks, words with high aggregated attention weights were rich in emotional semantics, as quantitatively validated by an emotion lexicon labeled by human annotators. Our results show that structured attention weights encode rich semantics in sentiment analysis, and match human interpretations of semantics.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural attention, especially the self-attention made popular by the Transformer, has become the workhorse of state-of-the-art natural language processing (NLP) models. Very recent work suggests that the self-attention in the Transformer encodes syntactic information; Here, we show that self-attention scores encode semantics by considering sentiment analysis tasks. In contrast to gradient-based feature attribution methods, we propose a simple and effective Layer-wise Attention Tracing (LAT) method to analyze structured attention weights. We apply our method to Transformer models trained on two tasks that have surface dissimilarities, but share common semanticssentiment analysis of movie reviews and timeseries valence prediction in life story narratives. Across both tasks, words with high aggregated attention weights were rich in emotional semantics, as quantitatively validated by an emotion lexicon labeled by human annotators. Our results show that structured attention weights encode rich semantics in sentiment analysis, and match human interpretations of semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, variants of neural network attention mechanisms such as local attention (Bahdanau et al., 2015; Luong et al., 2015) and self-attention in the Transformer (Vaswani et al., 2017) have become the de facto go-to neural models for a variety of NLP tasks including machine translation (Luong et al., 2015; Vaswani et al., 2017) , syntactic parsing (Vinyals et al., 2015) , and language modeling (Liu and Lapata, 2018; Dai et al., 2019) .",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 113,
"end": 132,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 171,
"end": 193,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 296,
"end": 316,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 317,
"end": 338,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 359,
"end": 381,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 406,
"end": 428,
"text": "(Liu and Lapata, 2018;",
"ref_id": "BIBREF20"
},
{
"start": 429,
"end": 446,
"text": "Dai et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Attention has brought about increased performance gains, but what do these values 'mean'? Previous studies have visualized and shown how learnt attention contributes to decisions in tasks like natural language inference and aspect-level sentiment (Lin et al., 2017; Wang et al., 2016; Ghaeini et al., 2018) . Recent studies on the Transformer (Vaswani et al., 2017) have demonstrated that attention-based representations encode syntactic information (Tenney et al., 2019 ) such as anaphora (Voita et al., 2018; Goldberg, 2019) , Partsof-Speech (Vig and Belinkov, 2019) and dependencies (Raganato and Tiedemann, 2018; Hewitt and Manning, 2019; Clark et al., 2019) . Other researchers have also done very recent extensive analyses on self-attention, by, for example, implementing gradient-based Layer-wise Relevance Propagation (LRP) method on the Transformer (Voita et al., 2019) to study attributions of graident-scores to heads, or graph-based aggregation method to visualize attention flows (Abnar and Zuidema, 2020) . These very recent works have not looked at whether the structured attention weights themselves aggregate on tokens with strong semantic meaning in tasks such as sentiment analysis. Thus, it is still unclear if the attention on input words may actually encode semantic information relevant to the task.",
"cite_spans": [
{
"start": 247,
"end": 265,
"text": "(Lin et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 266,
"end": 284,
"text": "Wang et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 285,
"end": 306,
"text": "Ghaeini et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 343,
"end": 365,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 450,
"end": 470,
"text": "(Tenney et al., 2019",
"ref_id": "BIBREF30"
},
{
"start": 490,
"end": 510,
"text": "(Voita et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 511,
"end": 526,
"text": "Goldberg, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 544,
"end": 568,
"text": "(Vig and Belinkov, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 586,
"end": 616,
"text": "(Raganato and Tiedemann, 2018;",
"ref_id": "BIBREF24"
},
{
"start": 617,
"end": 642,
"text": "Hewitt and Manning, 2019;",
"ref_id": "BIBREF12"
},
{
"start": 643,
"end": 662,
"text": "Clark et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 858,
"end": 878,
"text": "(Voita et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 993,
"end": 1018,
"text": "(Abnar and Zuidema, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we were interested in extending previous studies on attention and syntax further, by probing the structured attention weights and studying whether these weights encode task-relevant semantic information. In contrast to gradient-based attribution methods (Voita et al., 2019) , we were explicitly interested in probing learnt attention weights rather than analyzing gradients. To do this, we propose a Layer-wise Attention Tracing (LAT) method to aggregate the structured attention weights learnt by self-attention layers onto input tokens. We show that these attention scores on input tokens correlate with an external measure of semantics across two tasks: a sentiment analysis task on a movie review dataset, and an emotion understanding task on a life stories narrative dataset. These tasks differ in structure (single-example classification vs. timeseries regression), and in domain (movie reviews vs. daily life events), but should share the same semantics, in that the same words should be important in both tasks. We propose a method of external validation of the semantics of these tasks, using emotion lexicons. We find evidence for the hypothesis that if self-attention mechanisms can learn emotion semantics, then LAT-calculated attention scores should be higher for words that have stronger emotional semantic meaning.",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(Voita et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use an encoder-decoder architecture as shown in Fig. 1 . Our encoder is identical to the encoder of the Transformer (Vaswani et al., 2017) , with an additional local attention layer (Luong et al., 2015 ). Our decoder is task-specific: a simple Multilayer Perceptron (MLP) for the classification task, and a LSTM followed by a MLP for the time-series prediction task.",
"cite_spans": [
{
"start": 119,
"end": 141,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 185,
"end": 204,
"text": "(Luong et al., 2015",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 51,
"end": 57,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "Self-attention Layers. The encoder is identical to the original Transformer encoder and consists of a series of stacked self-attention layers. Each layer contains a multi-head self-attention layer, followed by an element-wise feed forward layer and residual connections. Following Vaswani et al. (2017) , we use = 6 stacked layers and = 8 heads, and a hidden dimension of = 512.",
"cite_spans": [
{
"start": 281,
"end": 302,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "We briefly recap the Transformer equations, to better illustrate our LAT method, which traces attention back through the layers. For a given selfattention layer \u2208 [1, ], we denote the input to using X \u2208 R \u00d7 , which represents tokens, each embedded using a -dimensional embedding. We keep the same input embedding size for all layers. The first layer takes as input the word tokens. A selfattention layer learns a set of Query, Key and Values matrices that are indexed by (i.e., weights are not shared across layers). Formally, these matrices are produced in parallel:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "Q = (X ), K = (X ), V = (X ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "where { , , } (\u2022) are each parameterized by a linear layer, and each matrix is of size \u00d7 . To enable multi-head attention, Q, K and V are partitioned into separate \u00d7 \u210e attention heads indexed by \u210e \u2208 [1, ], where \u210e = = 64.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "Each head learns a self-attention matrix s( ) \u210e using the scaled inner product of Q \u210e and K \u210e followed Code is available at https://github.com/ frankaging/LAT_for_Transformer Figure 1 : Attention-based encoder-decoder model architecture for classification task (left) and time-series task (right); The latter has a recurrent unit to generate predictions over time.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "by a softmax operation. The self-attention matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "s( ) \u210e is then multiplied by V \u210e to produce Z \u210e : s( ) \u210e = softmax Q \u210e K \u210e \u221a \u210e \u2208 R \u00d7 (2) Z \u210e = s( ) \u210e V \u210e \u2208 R \u00d7 \u210e (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
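{
"text": "As a concrete illustration of Eqns. 1-3, the following NumPy sketch computes the per-head attention matrices s and attended values Z for a single encoder layer. It is a minimal re-implementation with random, untrained weights for illustration only; the projection names W_q, W_k, W_v are our own assumptions and are not taken from the released code.

import numpy as np

n, d, H = 10, 512, 8                        # tokens, hidden dimension, heads (as in the text)
d_h = d // H                                # per-head dimension (64)
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

X = rng.normal(size=(n, d))                 # input X_l to the layer
W_q, W_k, W_v = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v         # Eqn. 1: linear maps f_Q, f_K, f_V
Q, K, V = (M.reshape(n, H, d_h) for M in (Q, K, V))      # split into H heads

s = np.empty((H, n, n))
Z = np.empty((H, n, d_h))
for h in range(H):
    s[h] = softmax(Q[:, h] @ K[:, h].T / np.sqrt(d_h))   # Eqn. 2: scaled dot product + softmax
    Z[h] = s[h] @ V[:, h]                                # Eqn. 3: attended values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},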
{
"text": "Next, we concatenate Z \u210e from each head \u210e to produce the output of layer (i.e., the input to layer + 1) X +1 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X +1 = ( [Z 1 , ..., Z ]) \u2208 R \u00d7",
"eq_num": "(4)"
}
],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "where (\u2022) is parameterized by two fully connected feed-forward layers (with 64 dimensions for the first layer then scaling back to -dimensions) with residual connections and layer normalization. X +1 is fed upwards to the next layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
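{
"text": "Continuing in the same spirit, a self-contained sketch of Eqn. 4: the per-head outputs are concatenated and passed through the two feed-forward layers (64 dimensions, then back to d) with a residual connection and layer normalization. The ReLU nonlinearity and the placement of the residual around the concatenated heads are standard Transformer choices assumed here rather than details stated in the paper.

import numpy as np

n, d, H, d_h = 10, 512, 8, 64
rng = np.random.default_rng(0)
Z = rng.normal(size=(H, n, d_h))                  # per-head outputs from Eqn. 3

Z_cat = Z.transpose(1, 0, 2).reshape(n, d)        # concatenate the H heads -> (n, d)
W1, b1 = rng.normal(size=(d, 64)), np.zeros(64)   # first feed-forward layer (64 dims)
W2, b2 = rng.normal(size=(64, d)), np.zeros(d)    # second layer scales back to d dims

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

ff = np.maximum(0.0, Z_cat @ W1 + b1) @ W2 + b2   # two feed-forward layers (ReLU assumed)
X_next = layer_norm(Z_cat + ff)                   # residual + layer norm -> X_{l+1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},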
{
"text": "Local Attention Layer. The output from the last self-attention layer X +1 is fed into a local attention layer. We then take a weighted sum over row vectors of the output, and produces a context vector using learnt local attention vector c :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = softmax (X +1 ) \u2208 R (5) = (X +1 ) c = =1 c +1 \u2208 R",
"eq_num": "(6)"
}
],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
{
"text": "where (\u2022) is parameterized by a multi-layer perceptron (MLP) with two hidden layers that are 128-dimensional and 64-dimensional. The MLP layers are trained with dropout of = 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
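{
"text": "A minimal sketch of the local attention layer in Eqns. 5-6, assuming the scorer g is the two-hidden-layer MLP (128 then 64 dimensions) described above with a scalar output per token; the tanh nonlinearity and the weight names are our own assumptions.

import numpy as np

n, d = 10, 512
rng = np.random.default_rng(0)
X_last = rng.normal(size=(n, d))              # output of the last self-attention layer, X_{L+1}

W1 = rng.normal(size=(d, 128))                # scorer g(.): MLP with 128- and 64-dim hidden layers
W2 = rng.normal(size=(128, 64))
W3 = rng.normal(size=(64, 1))
hidden = np.tanh(np.tanh(X_last @ W1) @ W2)
scores = (hidden @ W3).squeeze(-1)            # one scalar score per token

c = np.exp(scores - scores.max())
c /= c.sum()                                  # Eqn. 5: local attention weights (sum to 1)
t = c @ X_last                                # Eqn. 6: context vector, shape (d,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},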
{
"text": "Decoder. For the classification task, the context vector is fed into a decoder (\u2022) parameterized by a MLP to produce the output label. For the time-series task, context vectors from each time are fed into a LSTM (Hochreiter and Schmidhuber, 1997) layer with 300-dimensional hidden states before passing through a MLP. Both the MLP for the classification and the time-series tasks have the same 64-dimensional hidden space, and are trained with dropout of = 0.3. A complete model description can be found in the Appendix.",
"cite_spans": [
{
"start": 212,
"end": 246,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},
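{
"text": "For concreteness, a hedged PyTorch sketch of the two task-specific decoders: an MLP over the context vector for SST-5 classification, and an LSTM over the per-window context vectors followed by an MLP for SEND valence regression. The hidden sizes (64-dimensional MLP hidden space, 300-dimensional LSTM states, dropout 0.3) follow the text; the class names and the ReLU activation are our own.

import torch
import torch.nn as nn

class ClassifierDecoder(nn.Module):
    # MLP decoder with a 64-dimensional hidden layer and dropout p = 0.3
    def __init__(self, d=512, n_classes=5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                                 nn.Dropout(0.3), nn.Linear(64, n_classes))

    def forward(self, t):                      # t: (batch, d) context vectors
        return self.mlp(t)

class TimeSeriesDecoder(nn.Module):
    # LSTM with 300-dimensional hidden states, followed by the same MLP head
    def __init__(self, d=512):
        super().__init__()
        self.lstm = nn.LSTM(d, 300, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(300, 64), nn.ReLU(),
                                 nn.Dropout(0.3), nn.Linear(64, 1))

    def forward(self, t_seq):                  # t_seq: (batch, windows, d) context vectors
        h, _ = self.lstm(t_seq)
        return self.mlp(h).squeeze(-1)         # one valence prediction per window",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Model Architecture",
"sec_num": "2"
},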
{
"text": "To study whether structured attention weights encode semantics, we propose a tracing method, Layerwise Attention Tracing (LAT), to trace the attention 'paid' to input tokens (i.e. words) through the selfattention layers in our encoder. LAT, illustrated in Fig. 2 , involves three main steps. First, starting from the local attention layer and a fixed \"quantity\" of attention, we distribute attention weights back to Z \u210e , the last self-attention layer of each head \u210e \u2208 [1, ]. Second, we trace the attention back through each self-attention layer \u2208 [1, ]. Third, from the first layer of each head, we trace the attention back onto each token in the input sequence, by accumulating attention scores from each head to the corresponding position. We do not consider the decoder in LAT, as the MLP and LSTM layers in the decoder do not modify attention. Furthermore, we specifically ignore the feedforward layers and residual connections in the encoder, as we were interested in the attention , not the neural activations they modify-this is our main differentiation from gradient-based or relevance-based work (Voita et al., 2019) , and we note another recent paper (Abnar and Zuidema, 2020) that made the same assumptions.",
"cite_spans": [
{
"start": 1106,
"end": 1126,
"text": "(Voita et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 1162,
"end": 1187,
"text": "(Abnar and Zuidema, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 256,
"end": 262,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Layer-wise Attention Tracing",
"sec_num": "3"
},
{
"text": "Given an input sequence X of length tokens, the forward pass of the model (Eqn. 1-6) transforms X into the context vector . We consider how a fixed quantity of attention, A , gets divided back to the various heads. We refer to this quantity as the Attention Score that is accumulated down through the layers. From Eqn. 4 and Eqn. 6, we note that is a function of concatenated Z from the last self-attention layer, from each of the heads:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= =1 ( [ 1( ) , ..., ( ) ])",
"eq_num": "(7)"
}
],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "where \u210e ( ) is the attended Value vector from head \u210e \u2208 [1, ] of the last layer at position \u2208 [1, ]. On the forward pass, the contribution of head \u210e at position , \u210e ( ) , is weighted by c ; Thus, on this first step of LAT, we divide the attention score A back to head \u210e at position , using c :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A +1 \u210e ( ) = A",
"eq_num": "(8)"
}
],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "We use this notation to allude that this is the attention weights coming down from the \"( + 1)-th layer\", to follow the logic of the next step of LAT. Without loss of generality, we can set the initial attention score at the top, A , to be 1, then all subsequent attention scores can be interpreted as a proportion of the initial attention score. Note that in our attention tracing, we are interested in accumulating the attention A \u210e ( ) for each layer \u2208 [1, ] at each position , and so we focus on the attention weights (and not the hidden states that the attention multiplies, Z \u210e or V \u210e ), which remain unchanged through .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
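{
"text": "The first LAT step (Eqn. 8) simply splits the unit of attention at the context vector across heads and positions in proportion to the local attention weights; a small sketch, with c standing in for the learnt local attention vector:

import numpy as np

n, H = 10, 8
rng = np.random.default_rng(0)
c = rng.dirichlet(np.ones(n))            # stand-in for the local attention weights (sums to 1)

A_t = 1.0                                # initial attention score at the context vector
A_top = np.tile(c * A_t, (H, 1))         # Eqn. 8: A^{L+1}_{h(i)} = c_i * A_t, shape (H, n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},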
{
"text": "Tracing Self Attention. On the forward pass, Eqn. 3 applies the self-attention weights. We rewrite this equation to make the indices explicit:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "\u210e ( ) = =1 ( ) \u210e ( \u2192 ) \u210e ( ) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "where \u210e ( ) denotes the -th row of V \u210e (i.e., corresponding to the token in position ), and ( ) \u210e ( \u2192 ) is the ( , ) element of s ( ) \u210e , such that it captures the attention from position to position . The attended values Z \u210e then undergo two sets of feed-forward layers: Eqn. 4 with to get X and Eqn. 1 with to get V +1 \u210e . Using A \u210e ( ) to denote the attention score accumulated at head \u210e, position , layer , we can trace the attention coming down from the next-higher layer based on Eqn. 9:",
"cite_spans": [
{
"start": 92,
"end": 95,
"text": "( )",
"ref_id": null
},
{
"start": 130,
"end": 133,
"text": "( )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A \u210e ( ) = =1 ( ) \u210e ( \u2192 ) A +1 \u210e ( )",
"eq_num": "(10)"
}
],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "To confirm our intuition, on the forward pass (see Eqn. 9 and Fig. 2) , to get the hidden value at position on the \"upper\" part of the layer, we sum ( ) \u210e ( \u2192 ) over (the indices of the \"lower\" layer). Thus, on the LAT pass downwards (Eqn. 10), to get A \u210e ( ) as position on the \"lower\" layer, we sum the corresponding ( ) \u210e ( \u2192 ) 's over . Tracing to input tokens. Finally, for each input token , we sum up the attention weights from each head at the corresponding position in the first layer to obtain the accumulated attention weights paid to token :",
"cite_spans": [
{
"start": 319,
"end": 322,
"text": "( )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Fig. 2)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "A = \u210e=1 A 1 \u210e ( ) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
{
"text": "In summary, Eqns. 8, 10, and 11 describe the LAT method for tracing through the local and selfattention layers back to the input tokens .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},
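{
"text": "Putting Eqns. 8, 10 and 11 together, a compact sketch of the full LAT pass, given the local attention weights c and the per-layer, per-head self-attention matrices s (shape L x H x n x n) saved from a forward pass. This is our own illustrative re-implementation of the procedure described above, not the released code; because every attention row is a probability distribution, the traced scores sum to the initial attention score.

import numpy as np

def layerwise_attention_tracing(c, s, A_t=1.0):
    # c: (n,) local attention weights; s: (L, H, n, n) self-attention matrices,
    # where s[l, h, i, j] is the attention from position i to position j.
    L, H, n, _ = s.shape
    A = np.tile(c * A_t, (H, 1))                 # Eqn. 8: A^{L+1}_{h(i)} = c_i * A_t
    for l in range(L - 1, -1, -1):               # trace from the top layer down
        # Eqn. 10: A^{l}_{h(j)} = sum_i s^{(l)}_{h(i->j)} * A^{l+1}_{h(i)}
        A = np.einsum('hij,hi->hj', s[l], A)
    return A.sum(axis=0)                         # Eqn. 11: sum over heads -> per-token scores

# toy usage: random softmax-like attention matrices
rng = np.random.default_rng(0)
n, L, H = 6, 6, 8
c = rng.dirichlet(np.ones(n))
s = rng.dirichlet(np.ones(n), size=(L, H, n))    # each row sums to 1, like softmax outputs
token_scores = layerwise_attention_tracing(c, s)
print(token_scores.round(3), token_scores.sum()) # total attention is conserved (sums to 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracing Local Attention.",
"sec_num": null
},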
{
"text": "There has been extensive debate over what attention mechanisms learn. On the one hand, researchers have developed methods to probe learnt self-attention in Transformer-based models, and show that attention scores learnt by models like BERT encode syntactic information like Partsof-Speech (Vig and Belinkov, 2019) , dependencies (Hewitt and Manning, 2019; Raganato and Tiedemann, 2018) , anaphora (Goldberg, 2019; Voita et al., 2018) and other parts of the traditional NLP pipeline (Tenney et al., 2019) . These studies collectively suggest that self-attention mechanisms learn to encode syntactic information, which led us to propose the current work on whether selfattention can similarly learn to encode semantics.",
"cite_spans": [
{
"start": 289,
"end": 313,
"text": "(Vig and Belinkov, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 329,
"end": 355,
"text": "(Hewitt and Manning, 2019;",
"ref_id": "BIBREF12"
},
{
"start": 356,
"end": 385,
"text": "Raganato and Tiedemann, 2018)",
"ref_id": "BIBREF24"
},
{
"start": 397,
"end": 413,
"text": "(Goldberg, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 414,
"end": 433,
"text": "Voita et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 482,
"end": 503,
"text": "(Tenney et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "On the other hand, there are also other papers questioning the interpretations the field has placed on attention. These researchers show that attention weights have a low correlation with gradient-based measures of importance (Jain and Wallace, 2019; Serrano and Smith, 2019; Vashishth et al., 2019) . More recent analysis suggest that in certain regimes for the Transformer (i.e., sequence length greater than attention head dimension ), attention distributions are non-identifiable, posing problems for interpretability (Brunner et al., 2020) . In our work, we provide a method that can trace attention scores in Transformers to the input tokens, and show with both qualitative and quantitative evidence that these scores are semantically meaningful.",
"cite_spans": [
{
"start": 226,
"end": 250,
"text": "(Jain and Wallace, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 251,
"end": 275,
"text": "Serrano and Smith, 2019;",
"ref_id": "BIBREF25"
},
{
"start": 276,
"end": 299,
"text": "Vashishth et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 522,
"end": 544,
"text": "(Brunner et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Beyond attention-based studies, there have been numerous studies that proposed gradient-based attribution analyses (Dimopoulos et al., 1995; Gevrey et al., 2003; Simonyan et al., 2013) and layer-wise relevance propagation (Bach et al., 2015; Arras et al., 2017) . Most related to the current work is Voita et al. (2019) , who extended layer-wise relevance propagation to the Transformer to examine the contribution of individual heads to the final decision. In parallel, Abnar and Zuidema (2020) recently proposed a method to roll-out structured attention weights inside the Transformer model, which is similar to our LAT method we propose here, although we provided more analysis via an external validation using external knowledge. We sought to investigate the attention accumulated onto individual input tokens using attention tracing, in a more similar manner to Vig and Belinkov (2019) for syntax or how Voita et al. (2018) looked at the attention paid to other words. We also calculate a gradient-based score (see Eqn. 13) to contrast our attention results with, and though these two scores are correlated (see Footnote 6), they behave differently in our analyses.",
"cite_spans": [
{
"start": 115,
"end": 140,
"text": "(Dimopoulos et al., 1995;",
"ref_id": "BIBREF8"
},
{
"start": 141,
"end": 161,
"text": "Gevrey et al., 2003;",
"ref_id": "BIBREF9"
},
{
"start": 162,
"end": 184,
"text": "Simonyan et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 222,
"end": 241,
"text": "(Bach et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 242,
"end": 261,
"text": "Arras et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 300,
"end": 319,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF36"
},
{
"start": 471,
"end": 495,
"text": "Abnar and Zuidema (2020)",
"ref_id": "BIBREF0"
},
{
"start": 867,
"end": 890,
"text": "Vig and Belinkov (2019)",
"ref_id": "BIBREF33"
},
{
"start": 909,
"end": 928,
"text": "Voita et al. (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We note that our models do not fall into this regime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Table 1: SST-5 Accuracy (SD_runs) by Model: RNTN (Socher et al., 2013) 45.7 (-); BiLSTM (Tai et al., 2015) 46.5 (-); Transformer + Position encoding (Ambartsoumian and Popowich, 2018) 45.0 (0.4); DiSAN (Shen et al., 2018) 51.7 (-); Our Self-attention 47.5 (0.2). SEND CCC (SD_runs, SD_eg) by Model: LSTM .40 (-, .32); SFT .34 (-, .33); Human .50 (-, .12); Our Self-attention + LSTM .54 (.02, .36).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "To show that our interpretation methods can generalize across different types of datasets, we apply our method to two tasks with different characteristics, namely, sentiment classification of movie reviews on the Stanford Sentiment Treebank (SST), and time-series valence regression over long sequences narrative stories on the Stanford Emotional Narratives Dataset (SEND).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "We used the fine-grained (5-class) version of the Stanford Sentiment Treebank (SST-5) movie review dataset (Socher et al., 2013) , which has been used in previous studies of interpretability of neural network models Arras et al., 2017) . All sentences were tokenized, and preprocessed by lowercasing, similar to . We embed each token using 300-dimensional GloVe word embeddings (Pennington et al., 2014) . Each sentence is labeled via crowdsourcing with one of five sentiment classes {Very Negative, Negative, Neutral, Positive, and Very Positive}. We used",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 216,
"end": 235,
"text": "Arras et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 378,
"end": 403,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stanford Sentiment Treebank",
"sec_num": "5.1"
},
{
"text": "Although the SST contains labels on each parse tree of the reviews, we only considered full sentences. the same dataset partitions as in the original paper: a Train set (8544 sentences, average length 19 tokens), a Validation set (1101 sentences, average length 19 tokens) and a Test set (2210 sentences, average length 19 tokens). Models are trained to maximize the 5-class classification accuracy by minimizing multi-class cross-entropy loss. We compare our model with previous works on SST that are based on LSTM (Tai et al., 2015) and Transformer (Ambartsoumian and Popowich, 2018; Shen et al., 2018) .",
"cite_spans": [
{
"start": 516,
"end": 534,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 551,
"end": 585,
"text": "(Ambartsoumian and Popowich, 2018;",
"ref_id": "BIBREF1"
},
{
"start": 586,
"end": 604,
"text": "Shen et al., 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stanford Sentiment Treebank",
"sec_num": "5.1"
},
{
"text": "The SEND comprises videos of participants narrating emotional life events. Each video is professionally transcribed, and annotated via crowdsourcing with emotion valence scores ranging from \"Very Negative\" [-1] to \"Very Positive\" [1] continuously sampled at every 0.5s. Details can be found on the authors' GitHub repository. The SEND has previously been used to train deep learning models to predict emotion valence over time .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stanford Emotional Narratives Dataset",
"sec_num": "5.2"
},
{
"text": "The SEND has 193 transcripts, and each one contains multiple sentences. We preprocess them by tokenizing and lowercasing as in . Additionally, we divide each transcript into 5-second time windows by using timestamps provided in the dataset. We use the average valence scores during a time window as the label of that window. We use the same partitions as in the original paper: a Train set (114 transcripts, average length 357 tokens, average window length 13 tokens), a Validation set (40 transcripts, average length 387 tokens, average window length 15 tokens) and a Test set (39 transcripts, average length 333 tokens, average window length 13 tokens). We embed each token in the same way as for SST-5. As in the original papers , we use the Concordance Correlation Coefficient (CCC (Lin, 1989) ) as our evaluation metric (See Appendix for the definiton). We compare our model with previous works on SEND that use LSTM and Transformer .",
"cite_spans": [
{
"start": 786,
"end": 797,
"text": "(Lin, 1989)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stanford Emotional Narratives Dataset",
"sec_num": "5.2"
},
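{
"text": "The CCC definition itself is deferred to the Appendix; for reference, a standard implementation of Lin's concordance correlation coefficient is sketched below. This is a generic helper, not the authors' evaluation script.

import numpy as np

def ccc(y_true, y_pred):
    # Lin's concordance correlation coefficient between two 1-D series
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

print(ccc([0.1, 0.4, 0.8], [0.0, 0.5, 0.7]))     # close to 1 for well-aligned valence series",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stanford Emotional Narratives Dataset",
"sec_num": "5.2"
},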
{
"text": "We report the results of our Transformer-based models in Table 1 with performances of state-of-the-art (SOTA) models trained with these two datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Model training and results",
"sec_num": "6.1"
},
{
"text": "We selected models in the literature that are the most representative and relevant to our models. Our Transformer-based model for the SST-5 classification task (Fig. 1) achieves good performance, with an accuracy (\u00b1 standard deviation) of 47.5% \u00b1 49.9% on the five-class sentiment classification. For the SEND dataset, our model outperforms previous SOTA models and even average human performance on this task, with a mean CCC of .54 \u00b1 .36 on the Test set. Interestingly, our window-based Transformer encoder increases performance compared to the Simple Fusion Transformer proposed by , who used a Transformer-based encoder over the whole narrative sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "(Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model training and results",
"sec_num": "6.1"
},
{
"text": "Both models are trained with the Adam (Kingma and Ba, 2015) optimization algorithm with a learning rate of 10 \u22124 . As our goal was analyzing structured attention weights, not maximizing performance, we manually specified hyperparameters without any grid search. We include details about our experiment setup in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model training and results",
"sec_num": "6.1"
},
{
"text": "Given that our Transformer-based models achieved comparable state-of-the-art performance on the SST and SEND, we then proceed to analyze the attention scores produced by LAT on these models. After computing A for all the words in a given sequence, we normalize attention scores using the softmax function to have them sum to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model training and results",
"sec_num": "6.1"
},
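{
"text": "A small sketch of the normalization step just described: the traced per-token scores for one sequence are passed through a softmax so that they sum to 1 before further analysis.

import numpy as np

def normalize_scores(token_scores):
    # softmax over the traced LAT scores of one sequence
    e = np.exp(token_scores - np.max(token_scores))
    return e / e.sum()

print(normalize_scores(np.array([0.20, 0.05, 0.75])).sum())   # 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model training and results",
"sec_num": "6.1"
},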
{
"text": "The flow diagram in Fig. 3 visualizes how attention aggregates using LAT across all heads and layers for the model trained with SST-5 for an example input. Rows represents self-attention layers and columns represent attention heads. Dots represent different tokens at head \u210e \u2208 [1, ] (left to right), position \u2208 [1, ] of layer \u2208 [1, ] (bottom to top). Dots in the bottom-most layer represents input tokens. The darker the color of each dot, the higher the accumulated attention score at that position, calculated using by Eqns. 8, 10 and 11. Attention weights ( ) \u210e ( \u2192 ) in each layer are illustrated by lines connecting tokens in consecutive layers.",
"cite_spans": [
{
"start": 559,
"end": 562,
"text": "( )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 20,
"end": 26,
"text": "Fig. 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Visualizing Layerwise Attention Tracing",
"sec_num": "6.2"
},
{
"text": "This diagram illustrates some coarse-grained differences between heads. For example, all heads in the top last layer distributed attention fairly equally across all tokens. Other heads (e.g., Head 6, Layer 4, and Head 8, Layer 3) have a downward-triangle pattern, where attention weights are accumulated to a specific token in a lower-layer, while others (e.g. Head 5, Layer 1) seem to re-distribute accumulated attention more broadly. Finally, at the input layer, we note that attention scores seem to be highest for words with strong emotion semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualizing Layerwise Attention Tracing",
"sec_num": "6.2"
},
{
"text": "To validate that the attention weights aggregated on the input tokens by LAT is semantically meaningful, we rank all unique word-level tokens in the Test set by their averaged attention scores received from all sequences that they appear. Concretely, we first use LAT to trace attention weights paid to input tokens for every sequences in the Test set. For tokens that appear more than once, we average their attention scores across occurrences. We then rank tokens by their average attention score, and illustrate in Fig. 4 using word clouds where a larger font size maps to a higher average attention score. For both datasets, we observe that words expressing strong emotions also have higher attention scores, see e.g. sorry, painful, unsatisfying for SST-5, and congratulations, freaking, comfortable for SEND. We note that stop words do not receive high attention scores in either of the datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 518,
"end": 524,
"text": "Fig. 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentiment Representations of Words",
"sec_num": "6.3"
},
{
"text": "One advantage of extracting emotion semantics from natural language text is that the field has amassed large, annotated references of emotion semantics. We refer, of course, to the emotion lexicons that earlier NLP researchers used for sentiment analysis and related tasks (Hu and Liu, 2004) . Although they seem to have fallen out of favor with the rise of deep learning (and the hypothesis that deep learning can learn such knowledge in a data-driven manner), in our task, we sought to use emotion lexicons as an external validation of what our model learns. We used a lexicon (Warriner et al., 2013) of nearly 14,000 English lemmas that are each annotated by an average of 20 volunteers for emotional valence, which corresponds exactly to the semantics in our tasks. The mean valence ratings in this lexicon are real-valued numbers from 1 to 9.",
"cite_spans": [
{
"start": 273,
"end": 291,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 579,
"end": 602,
"text": "(Warriner et al., 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "We hypothesize that our LAT method produce attention scores such that words having higher scores will tend to have greater emotional meaning. Additionally, since our attention scores do not differentiate emotion \"directions\" (i.e., negative and positive), these attention scores should be high for both very positive words, as well as very negative words. Thus, we expect a U-shaped relationship between our attention scores and the lexicon's valence ratings. We examine this hypothesis by fitting a quadratic regression equation :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A = 0 + 1 Val + 2 [Val ] 2 +",
"eq_num": "(12)"
}
],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "where A is the averaged attention score of a particular word derived by the LAT method, and Val represents the valence rating of that word from the Warriner et al. (2013) lexicon. We hypothesized a statistically-significant coefficient 2 on the quadratic term. To contrast our attention score with another measure of importance, the gradient, i.e., how important the inputs are to affecting the output , we also calculate a gradient score on each token by computing squared partial derivatives:",
"cite_spans": [
{
"start": 148,
"end": 170,
"text": "Warriner et al. (2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
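{
"text": "To make the validation concrete, a sketch of the quadratic fit in Eqn. 12 using ordinary least squares in NumPy. The paper reports fitting with R's lm(att ~ poly(val, 2)); the plain (non-orthogonal) polynomial basis used here yields the same fitted curve, although the individual coefficients are parameterized differently. The data below are synthetic placeholders, not the paper's measurements.

import numpy as np

rng = np.random.default_rng(0)
val = rng.uniform(1, 9, size=200)                          # Warriner-style valence ratings (1-9)
att = 0.02 * (val - 5.0) ** 2 + rng.normal(0, 0.05, 200)   # synthetic U-shaped attention scores

X = np.column_stack([np.ones_like(val), val, val ** 2])    # design matrix [1, Val, Val^2]
beta, *_ = np.linalg.lstsq(X, att, rcond=None)             # Eqn. 12 by least squares
print('beta_0, beta_1, beta_2 =', beta.round(4))           # expect beta_2 > 0 for a U-shape",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},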
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G ( ) = ( ( ) ( )) 2",
"eq_num": "(13)"
}
],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "where can be parameterized by neural networks, and G ( ) is the gradient of a particular space dimension of the embedding for the input token . We then regress G on the lexicon valence ratings using Eqn. 12.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
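{
"text": "A hedged PyTorch sketch of the squared-gradient score in Eqn. 13, taken with respect to the input word embeddings. The toy network below stands in for the trained model y(.), and summing the per-dimension scores G_k into a single per-token number is our own aggregation choice.

import torch

n, d = 10, 300
emb = torch.randn(n, d, requires_grad=True)          # input token embeddings x_i
model = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

y = model(emb).mean()                                # toy scalar output y(x)
y.backward()

G = emb.grad ** 2                                    # Eqn. 13: squared partial derivatives, (n, d)
token_gradient_score = G.sum(dim=1)                  # aggregate over embedding dimensions k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},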
{
"text": "We plot both our attention scores and gradient scores for each word against Warriner et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "Specifically, we used the following formula in R syntax: lm(att \u223c poly(val,2)), where poly() creates orthogonal polynomials to avoid collinearity issues. (2013) valence ratings, in Fig. 5 . For both tasks, we considered only words that appeared in both our Test sets and the lexicon, and plot only scores below 0.4 to make the plot more readable . We can see clearly that there exist a U-shaped, quadratic relationship between attention scores and the Warriner valence ratings ( 2 = 0.283, = 0.040, = 7.04, < .001 for SST-5; 2 = 0.242, = 0.039, = 6.21, < .001 for SEND). Our results support our hypothesis that the attention scores recovered by our LAT method do track emotional semantics. As a result, we show that structured attention weights may encode semantics independent of other types of connections in the model (e.g., linear feedforward layers and residual layers.). By contrast, there is no clear quadratic relationship between gradient scores and valence ratings across both tasks (SST-5, = 0.19; SEND, = 0.28) .",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 187,
"text": "Fig. 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Quantitative validation with an emotion lexicon",
"sec_num": "6.4"
},
{
"text": "We next analyze the amount of attention paid to sentiment words in each head. Within each head \u210e, we analyze the proportion of accumulated attention A 1 \u210e ( ) on emotional words, specifically focusing on very positive and very negative words , aggregated This plotting rule only filtered out less than 1% of words in the Test sets: .171% for SST-5 and .754% for SEND.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},
{
"text": "On the SST, A and G ( ) are correlated at = .80, and on the SEND, = .37. The two values are highly correlated (on the SST), but vary differently with respect to valence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},
{
"text": "For SST-5, we used the original word-level very positive and very negative labels in the dataset. For SEND, we used the Warriner lexicon and chose a cutoff \u2265 6.5 for very positive, and < 3.5 for very negative. over the Test sets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},
{
"text": "A 1 \u210e (tag) = X \u2208X |X | =1 (A 1 \u210e ( ) )1 label( )=tag X \u2208X |X | =1 (A 1 \u210e ( ) ) (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},
{
"text": "where X is the subset of sequences that contain at least 1 word with the selected tag . Fig. 6 shows the proportion of attention accumulated by heads to very positive and very negative words, compared with chance. All heads do seem to pay more attention to strongly emotional words, compared to chance, and some heads seem to 'specialize' more: For example, Head 4 in our SEND model pays 24% of its accumulated attention to very negative words while the mean of all other heads is closer to 15%. While Fig. 6 is specific to the model we trained, it is illustrative that specialization to strong emotional semantics does emerge from the learnt attention weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 94,
"text": "Fig. 6",
"ref_id": "FIGREF4"
},
{
"start": 502,
"end": 508,
"text": "Fig. 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},
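{
"text": "Eqn. 14 is a ratio of two sums over the Test sequences that contain at least one word with the tag; a sketch, assuming the traced first-layer scores A^1 for each sequence are available as an (H x n) array together with a boolean mask marking the tagged tokens:

import numpy as np

def head_attention_proportion(per_seq_A1, per_seq_is_tagged):
    # per_seq_A1: list of (H, n_i) arrays holding A^1_{h(i)} for each sequence
    # per_seq_is_tagged: list of boolean (n_i,) arrays, True where label(i) == tag
    num = sum(A1[:, mask].sum(axis=1)
              for A1, mask in zip(per_seq_A1, per_seq_is_tagged) if mask.any())
    den = sum(A1.sum(axis=1)
              for A1, mask in zip(per_seq_A1, per_seq_is_tagged) if mask.any())
    return num / den                         # Eqn. 14: per-head proportion, shape (H,)

# toy usage with two sequences; the second has no tagged word and is excluded
rng = np.random.default_rng(0)
A1s = [rng.random((8, 5)), rng.random((8, 7))]
masks = [np.array([0, 1, 0, 0, 1], bool), np.zeros(7, bool)]
print(head_attention_proportion(A1s, masks).round(3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Attention on Sentiment Words",
"sec_num": "6.5"
},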
{
"text": "In this work, we analyzed whether structured attention weights encode semantics in sentiment analysis tasks, using our proposed probing method LAT to trace attention through multiple layers in the Transformer. We demonstrated that the accumulated attention scores tended to favor words with greater semantic meaning, in this case, emotional meaning. We applied LAT to two tasks having similar semantics, and show that our results generalize across both tasks/domains. We validated our results quantitatively with an emotion lexicon, and showed that our attention scores are highest for both highly positive and highly negative words-our a priori hypothesis for the quadratic, \"U-shaped\" relationship. We also found some evidence for specialization of heads to emotional meaning. Although it may seem that our attention tracing is \"incomplete\" as it does not take into account the feed-forward layers and residual connections, by contrast, this quadratic relationship was not shown by pure gradient-based importance, which suggests that there may be some utility to looking only at attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We believe that attention in its various forms (Luong et al., 2015; Vaswani et al., 2017) are not only effective for performance, but may also provide That is, when calculating A 1 \u210e (very positive), we exclude sequences that do not contain at least 1 very positive word. interpretable explanations of model behaviour. It may not happen with today's implementations; we may need to engineer inductive biases to constrain attention mechanisms in order to address issues of identifiability that Jain and Wallace (2019) and others have pointed out. And perhaps, attention should not be interpreted like gradient-based measures (see Fig. 5 ). This debate is not yet resolved, and we hope our contributions will be useful in informing future work on this topic.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 68,
"end": 89,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 629,
"end": 635,
"text": "Fig. 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantifying attention flow in transformers",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4190--4197",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.385"
]
},
"num": null,
"urls": [],
"raw_text": "Samira Abnar and Willem Zuidema. 2020. Quantify- ing attention flow in transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190-4197, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Self-attention: A better building block for sentiment analysis neural network classifiers",
"authors": [
{
"first": "Artaches",
"middle": [],
"last": "Ambartsoumian",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Popowich",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "130--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artaches Ambartsoumian and Fred Popowich. 2018. Self-attention: A better building block for sentiment analysis neural network classifiers. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 130-139.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explaining recurrent neural network predictions in sentiment analysis",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017. Explaining recurrent neural network predictions in sentiment analysis. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 159-168.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Klauschen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS one",
"volume": "",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Montavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wo- jciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise rele- vance propagation. PloS one, 10(7).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 4th International Conference on Learning Repre- sentations (ICLR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On identifiability in transformers",
"authors": [
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Damian",
"middle": [
"Pascual"
],
"last": "Ortiz",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gino Brunner, Yang Liu, Damian Pascual Ortiz, Oliver Richter, Massimiliano Ciaramita, and Roger Watten- hofer. 2020. On identifiability in transformers. In International Conference on Learning Representa- tions(ICLR).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What does BERT look at? An analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackBoxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackBoxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transformer-xl: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jaime",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2978--2988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2978-2988.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Use of some sensitivity criteria for choosing networks with good generalization ability",
"authors": [
{
"first": "Yannis",
"middle": [],
"last": "Dimopoulos",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Bourret",
"suffix": ""
},
{
"first": "Sovan",
"middle": [],
"last": "Lek",
"suffix": ""
}
],
"year": 1995,
"venue": "Neural Processing Letters",
"volume": "2",
"issue": "6",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannis Dimopoulos, Paul Bourret, and Sovan Lek. 1995. Use of some sensitivity criteria for choosing net- works with good generalization ability. Neural Pro- cessing Letters, 2(6):1-4.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Review and comparison of methods to study the contribution of variables in artificial neural network models",
"authors": [
{
"first": "Muriel",
"middle": [],
"last": "Gevrey",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Dimopoulos",
"suffix": ""
},
{
"first": "Sovan",
"middle": [],
"last": "Lek",
"suffix": ""
}
],
"year": 2003,
"venue": "Ecological modelling",
"volume": "160",
"issue": "3",
"pages": "249--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muriel Gevrey, Ioannis Dimopoulos, and Sovan Lek. 2003. Review and comparison of methods to study the contribution of variables in artificial neural net- work models. Ecological modelling, 160(3):249- 264.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpreting recurrent and attention-based neural models: A case study on natural language inference",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Xiaoli",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4952--4957",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Z Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neu- ral models: A case study on natural language infer- ence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4952-4957.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Assessing BERT's syntactic abilities",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05287"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abil- ities. arXiv preprint arXiv:1901.05287.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge discovery and Data Mining, pages 168-177.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is not explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3543--3556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 3543-3556.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Visualizing and understanding neural models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A concordance correlation coefficient to evaluate reproducibility",
"authors": [
{
"first": "Lawrence",
"middle": [
"I-Kuei"
],
"last": "Lin",
"suffix": ""
}
],
"year": 1989,
"venue": "Biometrics",
"volume": "",
"issue": "",
"pages": "255--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence I-Kuei Lin. 1989. A concordance correlation coefficient to evaluate reproducibility. Biometrics, pages 255-268.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A structured self-attentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "Nogueira dos Santos",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the 6th International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning structured text representations",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "63--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2018. Learning struc- tured text representations. Transactions of the Asso- ciation for Computational Linguistics, 6:63-75.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Ong",
"suffix": ""
},
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhi-Xuan",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Marianne",
"middle": [],
"last": "Reddan",
"suffix": ""
},
{
"first": "Isabella",
"middle": [],
"last": "Kahhale",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Mattek",
"suffix": ""
},
{
"first": "Jamil",
"middle": [],
"last": "Zaki",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Ong, Zhengxuan Wu, Zhi-Xuan Tan, Mari- anne Reddan, Isabella Kahhale, Alison Mattek, and Jamil Zaki. 2019. Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An analysis of encoder representations in Transformerbased machine translation",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "287--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Raganato and J\u00f6rg Tiedemann. 2018. An analysis of encoder representations in Transformer- based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Is attention interpretable?",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2931--2951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2931-2951.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "DiSAN: Directional Self-Attention Network for RNN/CNNfree language understanding",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shirui",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. DiSAN: Directional Self-Attention Network for RNN/CNN- free language understanding. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6034"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2358--2367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2358-2367.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bert rediscovers the classical nlp pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Attention interpretability across NLP tasks",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Singh Tomar",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11218"
]
},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention in- terpretability across NLP tasks. arXiv preprint arXiv:1909.11218.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Analyzing the structure of attention in a transformer language model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "63--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2773--2781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Gram- mar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773-2781.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Context-aware neural machine translation learns anaphora resolution",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Serdyukov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1264--1274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine transla- tion learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics, pages 1264-1274.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Moiseev",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5797--5808",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1580"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention-based LSTM for aspectlevel sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect- level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research",
"authors": [
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2013,
"venue": "Methods",
"volume": "45",
"issue": "4",
"pages": "1191--1207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brys- baert. 2013. Norms of valence, arousal, and dom- inance for 13,915 English lemmas. Behavior Re- search Methods, 45(4):1191-1207.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Attending to emotional narratives",
"authors": [
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tan",
"middle": [],
"last": "Zhi-Xuan",
"suffix": ""
},
{
"first": "Jamil",
"middle": [],
"last": "Zaki",
"suffix": ""
},
{
"first": "Desmond C",
"middle": [],
"last": "Ong",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII)",
"volume": "",
"issue": "",
"pages": "648--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengxuan Wu, Xiyu Zhang, Tan Zhi-Xuan, Jamil Zaki, and Desmond C Ong. 2019. Attending to emotional narratives. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 648-654. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An illustration of the Layer-wise Attention Tracing (LAT) method with an example forward pass through head \u210e. Left: On the forward pass, learnt attention weights are represented by lines producing Z \u210e from values V \u210e via self-attention (Eqn. 2-3) and the context vector from the last layer via local attention (Eqn. 5). Dashed circles represents multiple heads, and vertical columns represent MLP transformations, which do not redistribution attention. Right: LAT on a 'backward pass'. The thickness of the edges represents accumulating attention. Attention from incoming edges are accumulated at each position in each layer, as in Eqn. 10. Darker colors maps to greater accumulated attention scores. In this example, the input token \"bad\" receives the highest attention score.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "An example flow diagram of attention distributed through self-attention layers in action. On the bottom, the font weights illustrate the accumulated attention weights paid to a particular word. The predicted label and the true label are both positive. Note that the color of the dots represent the attention weights A \u210e ( ) (Eqn. 10), not the activation of those neurons, and so these are not affected by the states that are shared across heads.(a) SST-5. (b) SEND.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Word cloud created based on averaged accumulated attention weights assigned to words in the vocabularies of Test sets.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Scatterplot shows scores on y-axis derived from Eqn. 12 (LAT attention scores A in red circles and gradient scores G in blue triangles) and corresponding emotional valence ratings, Val , from the Warriner et al. (2013) lexicon on x-axis. Shared vocabulary size is 2335 for SST-5, and 660 for SEND.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "Heatmap of proportions of attention paid to words with selected semantics tags. The leftmost column \"rand\" shows the proportions if attention weights are uniformly distributed at chance.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null,
"html": null
}
}
}
}