{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:55.185889Z"
},
"title": "Attention vs non-attention for a Shapley-based explanation method",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kersten",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hugh",
"middle": [
"Mee"
],
"last": "Wong",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The field of explainable AI has recently seen an explosion in the number of explanation methods for highly non-linear deep neural networks. The extent to which such methods-that are often proposed and tested in the domain of computer vision-are appropriate to address the explainability challenges in NLP is yet relatively unexplored. In this work, we consider Contextual Decomposition (CD)-a Shapley-based input feature attribution method that has been shown to work well for recurrent NLP models-and we test the extent to which it is useful for models that contain attention operations. To this end, we extend CD to cover the operations necessary for attention-based models. We then compare how long distance subject-verb relationships are processed by models with and without attention, considering a number of different syntactic structures in two different languages: English and Dutch. Our experiments confirm that CD can successfully be applied for attentionbased models as well, providing an alternative Shapley-based attribution method for modern neural networks. In particular, using CD, we show that the English and Dutch models demonstrate similar processing behaviour, but that under the hood there are consistent differences between our attention and non-attention models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The field of explainable AI has recently seen an explosion in the number of explanation methods for highly non-linear deep neural networks. The extent to which such methods-that are often proposed and tested in the domain of computer vision-are appropriate to address the explainability challenges in NLP is yet relatively unexplored. In this work, we consider Contextual Decomposition (CD)-a Shapley-based input feature attribution method that has been shown to work well for recurrent NLP models-and we test the extent to which it is useful for models that contain attention operations. To this end, we extend CD to cover the operations necessary for attention-based models. We then compare how long distance subject-verb relationships are processed by models with and without attention, considering a number of different syntactic structures in two different languages: English and Dutch. Our experiments confirm that CD can successfully be applied for attentionbased models as well, providing an alternative Shapley-based attribution method for modern neural networks. In particular, using CD, we show that the English and Dutch models demonstrate similar processing behaviour, but that under the hood there are consistent differences between our attention and non-attention models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine learning models using deep neural architectures have seen tremendous performance improvements over the last few years. The advent of models such as LSTMs (Hochreiter and Schmidhuber, 1997) and, more recently, attention-based models such as Transformers (Vaswani et al., 2017) have allowed some language technologies to reach near human levels of performance. However, this performance has come at the cost of the interpretability of these models: high levels of nonlinearity make it a near impossible task for a human to comprehend how these models operate.",
"cite_spans": [
{
"start": 162,
"end": 196,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
},
{
"start": 261,
"end": 283,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Understanding how non-interpretable black box models make their predictions has become an active area of research in recent years Jumelet and Hupkes, 2018; Samek et al., 2019; Linzen et al., 2019; Tenney et al., 2019; Ettinger, 2020, i.a.) . One popular interpretability approach makes use of feature attribution methods, that explain a model prediction in terms of the contributions of the input features. For instance, a feature attribution method for a sentiment analysis task can tell the modeller how much each of the input words contributed to the decision of a particular sentence.",
"cite_spans": [
{
"start": 130,
"end": 155,
"text": "Jumelet and Hupkes, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 156,
"end": 175,
"text": "Samek et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 176,
"end": 196,
"text": "Linzen et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 197,
"end": 217,
"text": "Tenney et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 218,
"end": 239,
"text": "Ettinger, 2020, i.a.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multiple methods of assigning contributions to the input feature approaches exist. Some are based on local model approximations (Ribeiro et al., 2016) , others on gradient-based information (Simonyan et al., 2014; Sundararajan et al., 2017 ) and yet others consider perturbation-based methods (Lundberg and Lee, 2017) that leverage concepts from game theory such as Shapley values (Shapley, 1953) . Out of these approaches the Shapley-based attribution methods are computationally the most expensive, but they are better able at explaining more complex model dynamics involving feature interactions. This makes these methods well-suited for explaining the behaviour of current NLP models on a more linguistic level.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 190,
"end": 213,
"text": "(Simonyan et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 214,
"end": 239,
"text": "Sundararajan et al., 2017",
"ref_id": "BIBREF31"
},
{
"start": 381,
"end": 396,
"text": "(Shapley, 1953)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we therefore focus our efforts on that last category of attribution methods, focusing in particular on a method known as Contextual Decomposition (CD, Murdoch et al., 2018) , which provides a polynomial approach towards approximating Shapley values. This method has been shown to work well on recurrent models without attention (Jumelet et al., 2019; Saphra and Lopez, 2020) , but has not yet been used to provide insights into the linguistic capacities of attentionbased models. Here, to investigate the extent to which this method is also applicable for attention based models, we extend the method to include the operations required to deal with attention-based models and we compare two different recurrent models: a multi-layered LSTM model (similar to Jumelet et al., 2019) , and a Single Headed Attention RNN (SHA-RNN, Merity, 2019) . We focus on the task of language modelling and aim to discover simultaneously whether attribution methods like CD are applicable when attention is used, as well as how the attention mechanism influence the resulting feature attributions, focusing in particular on whether these attributions are in line with human intuitions. Following, i.a. Jumelet et al. (2019) , Lakretz et al. (2019) and Giulianelli et al. (2018) , we focus on how the models process long-distance subject verb relationships across a number of different syntactic constructions. To broaden our scope, we include two different languages: English and Dutch.",
"cite_spans": [
{
"start": 160,
"end": 186,
"text": "(CD, Murdoch et al., 2018)",
"ref_id": null
},
{
"start": 342,
"end": 364,
"text": "(Jumelet et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 365,
"end": 388,
"text": "Saphra and Lopez, 2020)",
"ref_id": "BIBREF28"
},
{
"start": 772,
"end": 793,
"text": "Jumelet et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 840,
"end": 853,
"text": "Merity, 2019)",
"ref_id": "BIBREF20"
},
{
"start": 1198,
"end": 1219,
"text": "Jumelet et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 1222,
"end": 1243,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 1248,
"end": 1273,
"text": "Giulianelli et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Through our experiments we find that, while both English and Dutch language models produce similar results, our attention and non-attention models behave differently. These differences manifest in incorrect attributions for the subjects in sentences with a plural subject-verb pair, where we find that a higher attribution is given to a plural subject when a singular verb is used compared to a singular subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions to the field thus lie in two dimensions: on the one hand, we compare attention and non-attention models with regards to their explainability. On the other hand, we perform our analysis in two languages, namely Dutch and English, to see if patterns hold in different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we first discuss the model architectures that we consider. Following this, we explain the attribution method that we use to explain the different models. Finally, we consider the task which we use to extract explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To examine the differences between attention and non-attention models, we look at one instance of each kind of model. For the attention model, we consider the Single Headed Attention RNN (SHA-RNN, Merity, 2019) , and for our non-attention model a multi-layered LSTM (Gulordava et al., 2018) . Since both models use an LSTM at their core, we hope to capture and isolate the influence of the attention mechanism on the behaviour of the model. Using a Transformer architecture instead would have made this comparison far more challenging, given that these kinds of models differ in multiple significant aspects from LSTMs with regards to their processing mechanism. Below, we give a brief overview of the SHA-RNN architecture.",
"cite_spans": [
{
"start": 197,
"end": 210,
"text": "Merity, 2019)",
"ref_id": "BIBREF20"
},
{
"start": 266,
"end": 290,
"text": "(Gulordava et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model architectures",
"sec_num": "2.1"
},
{
"text": "The attention model we consider is the Single Headed Attention RNN, or SHA-RNN, proposed by Merity (2019) . The SHA-RNN was designed to be a reasonable alternative to the comparatively much larger Transformer models. Merity argues that while larger models can bring better performance, this often comes at the cost of training and inference time. As such, the author proposed this smaller model, which achieves results comparable to earlier Transformer models, without hyperparameter tuning.",
"cite_spans": [
{
"start": 92,
"end": 105,
"text": "Merity (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SHA-RNN",
"sec_num": null
},
{
"text": "The SHA-RNN consists of a block structure with three modules: an LSTM, a pointer-based attention layer and a feed-forward Boom layer (we provide a graphical overview in Figure 1 ). These blocks can be stacked to create a similar setup to that of an encoder Transformer. Layer normalisation is applied at several points in the model. The attention layer in the SHA-RNN uses only a single attention head, creating a similar mechanism to Grave et al. (2017) and Merity et al. (2017) . This is in contrast to most other Transformer (and thus attention) models, which utilise multiple attention heads. However, recent work, like Michel et al. (2019) , has shown that using only a single attention head may in some cases provide similar performance to a multi-headed approach, while significantly reducing the computational cost. Importantly, when using multiple blocks of the SHA-RNN, the attention layer is only applied in the second to last block.",
"cite_spans": [
{
"start": 435,
"end": 454,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 459,
"end": 479,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF21"
},
{
"start": 624,
"end": 644,
"text": "Michel et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SHA-RNN",
"sec_num": null
},
{
"text": "The Boom layer represents the feed-forward layers commonly found in Transformer models (Vaswani et al., 2017) . In his work, Merity uses a single feed-forward layer with a GELU activation (Hendrycks and Gimpel, 2016) , followed by summation over the output to reduce the dimension of the resulting vector to that before applying the feed-forward layer.",
"cite_spans": [
{
"start": 87,
"end": 109,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 188,
"end": 216,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SHA-RNN",
"sec_num": null
},
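To make the block structure described above concrete, the following is a minimal PyTorch sketch, not the authors' or Merity's implementation, of one SHA-RNN-style block: an LSTM, a single-headed attention step and a Boom feed-forward layer with layer normalisation. The use of `nn.MultiheadAttention` with one head in place of the pointer-based attention, the expansion factor, the residual connections and the norm placement are simplifying assumptions; the `use_attention` flag reflects that only one block in the stack applies attention.

```python
# Minimal sketch of an SHA-RNN-style block (illustrative, not the original code).
import torch
import torch.nn as nn


class Boom(nn.Module):
    """Feed-forward 'Boom' layer: project up, GELU, then sum chunks back down."""
    def __init__(self, d_model, expansion=4):
        super().__init__()
        self.expansion = expansion
        self.ff = nn.Linear(d_model, d_model * expansion)
        self.act = nn.GELU()

    def forward(self, x):
        h = self.act(self.ff(x))
        # Sum over the expansion chunks to return to d_model dimensions.
        return h.view(*x.shape[:-1], self.expansion, x.shape[-1]).sum(dim=-2)


class ShaRnnBlock(nn.Module):
    def __init__(self, d_model, use_attention=True):
        super().__init__()
        self.use_attention = use_attention
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        # Single attention head, standing in for Merity's pointer-based attention.
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.boom = Boom(d_model)
        self.ln_attn = nn.LayerNorm(d_model)
        self.ln_boom = nn.LayerNorm(d_model)

    def forward(self, x, state=None):
        h, state = self.lstm(x, state)
        if self.use_attention:
            q = self.ln_attn(h)
            # Causal mask so each position only attends to the past.
            mask = torch.triu(torch.ones(h.size(1), h.size(1), dtype=torch.bool), 1)
            a, _ = self.attn(q, q, q, attn_mask=mask)
            h = h + a
        return h + self.boom(self.ln_boom(h)), state


x = torch.randn(2, 10, 650)   # (batch, time, d_model)
block = ShaRnnBlock(650)
out, _ = block(x)
print(out.shape)              # torch.Size([2, 10, 650])
```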
{
"text": "The interpretability method that we use and extend in this paper is Contextual Decomposition (CD Murdoch et al., 2018), a feature attribution method for explaining individual predictions made by an LSTM. CD decomposes the output into a sum of two contribution types \u03b2 + \u03b3: one part resulting from a specific \"relevant\" token or phrase (\u03b2), and one part resulting from all other input to the model (\u03b3), which is said to be \"irrelevant\". The token or phrase of interest is provided as an additional parameter to the model. CD performs a modified forward pass through the model for each individual token in the input sentence. The \u03b2 + \u03b3 decomposition is achieved by splitting up the hidden and cell state of the LSTM into two parts as well:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "h t = \u03b2 t + \u03b3 t (1) c t = \u03b2 c t + \u03b3 c t (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "This decomposition is constructed such that \u03b2 corresponds to contributions made solely by elements in the relevant phrase, while \u03b3 represents all other contributions. Fundamental to CD is the role of interactions between \u03b2 and \u03b3 terms that arrive from operations such as (point-wise) multiplications. CD resolves this by \"factorizing\" the outcome of a non-linear activation function into a sum of components, based on an approximation of the Shapley value of the activation function (Shapley, 1953) .",
"cite_spans": [
{
"start": 483,
"end": 498,
"text": "(Shapley, 1953)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "For example, the forget gate update of the cell state in an LSTM is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "c t = c t\u22121 \u03c3(W f x t + V f h t\u22121 + b f ) (3) where W f \u2208 R dx\u00d7d h , V f \u2208 R d h \u00d7d h and b f \u2208 R d h .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "CD decomposes both c t\u22121 and h t\u22121 into a sum of \u03b2 and \u03b3 terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "c t = (\u03b2 c t\u22121 + \u03b3 c t\u22121 ) \u03c3(W f x t + V f (\u03b2 t\u22121 + \u03b3 t\u22121 ) + b f ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "The forget gate is then decomposed into a sum of four components (x, \u03b2, \u03b3 & b f ), based on their Shapley values, which leads to a cross product between the terms in the decomposed cell state, and the decomposed forget gate. The \u03b2 + \u03b3 decomposition of the new cell state c t is formed by determining which specific interactions between \u03b2 and \u03b3 components should be assigned to the new \u03b2 c t and \u03b3 c t terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
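As an illustration of this bookkeeping, the toy NumPy sketch below splits the previous cell state and the forget-gate input into relevant (beta) and irrelevant (gamma) parts, linearises the sigmoid with a two-player Shapley split, and routes the cross terms so that the new beta only receives purely relevant interactions. This is a simplification: the actual method decomposes the gate input into four components (x, beta, gamma and the bias), and CD and GCD differ in how interactions are routed; the function names here are illustrative.

```python
# Toy sketch of a CD-style decomposed forget-gate step (not the original code).
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def shapley_split(rel, irrel):
    """Split sigmoid(rel + irrel) into a relevant and an irrelevant share."""
    rel_share = 0.5 * ((sigmoid(rel) - sigmoid(np.zeros_like(rel)))
                       + (sigmoid(rel + irrel) - sigmoid(irrel)))
    irrel_share = sigmoid(rel + irrel) - rel_share
    return rel_share, irrel_share


def decomposed_forget_step(beta_c, gamma_c, gate_rel, gate_irrel):
    f_rel, f_irrel = shapley_split(gate_rel, gate_irrel)
    # Purely relevant interaction goes to beta; everything else to gamma.
    new_beta_c = beta_c * f_rel
    new_gamma_c = beta_c * f_irrel + gamma_c * (f_rel + f_irrel)
    return new_beta_c, new_gamma_c


rng = np.random.default_rng(0)
beta_c, gamma_c = rng.normal(size=5), rng.normal(size=5)
gate_rel, gate_irrel = rng.normal(size=5), rng.normal(size=5)
nb, ng = decomposed_forget_step(beta_c, gamma_c, gate_rel, gate_irrel)
# Sanity check: the decomposition still sums to the full forward pass.
full = (beta_c + gamma_c) * sigmoid(gate_rel + gate_irrel)
print(np.allclose(nb + ng, full))   # True
```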
{
"text": "In this work, we consider the generalisation of the CD method proposed by Jumelet et al. 2019, namely Generalized Contextual Decomposition (GCD). They alter the way that \u03b2 and \u03b3 interactions are divided over these terms. As such, this method provides a more complete picture of the interactions within the model. For a more detailed explanation of the procedure we refer to the original papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition",
"sec_num": "2.2"
},
{
"text": "To test our models, we consider the Number Agreement (NA) task, a linguistic task that has stood central in various works in the interpretability literature (Lakretz et al., 2019; Linzen et al., 2016; Gulordava et al., 2018; Wolf, 2019; Goldberg, 2019) . In this task, a model is evaluated by how well it is able to track the subject-verb relations over long distances, as assessed by the percentage of cases in which the model is able to match the form of the verb to the number of the subject. The challenge in the NA task lies in the presence of one or more attractor nouns between the subject and the verb that competes with the subject. For instance in the sentence \"The boys at the car greet\", \"car\" forms the attractor noun, and is a different number than the boys, thereby possibly confusing the model to predict a singular verb, \"greets\".",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Lakretz et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 180,
"end": 200,
"text": "Linzen et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 201,
"end": 224,
"text": "Gulordava et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 225,
"end": 236,
"text": "Wolf, 2019;",
"ref_id": "BIBREF34"
},
{
"start": 237,
"end": 252,
"text": "Goldberg, 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Number Agreement Task",
"sec_num": "2.3"
},
{
"text": "Several earlier studies preceded us in considering number agreement as a means to investigate language models. Linzen et al. laid the groundwork for this task, using it to assess the ability of LSTMs to learn syntax-sensitive dependencies. In their work, they only considered the English language. Gulordava et al. (2018) extended the task to the Italian, Hebrew and Russian languages. Moreover, they provided a more in-depth study of the Italian model, comparing it to human subjects. Lakretz et al. (2019) provided a detailed look at the underlying mechanisms of LSTMs by which they are able to model grammatical structure. To this end, they performed an ablation study and discovered which units were mainly responsible for this mechanism. Finally, further research into the Italian version of the NA task in Lakretz et al. (2020) investigated how emergent mechanisms in language models relate to linguistic processing in humans.",
"cite_spans": [
{
"start": 298,
"end": 321,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 486,
"end": 507,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 812,
"end": 833,
"text": "Lakretz et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "132",
"sec_num": null
},
{
"text": "Number agreement has also been explored before in the context of attribution methods. Due to the clear dependency between a subject and a verb, it is a useful task to evaluate whether a model based its prediction of the verb on the number information of the subject. Poerner et al. 2018provide a large suite of evaluation tasks for attribution methods including number agreement, and show that attribution methods can sometimes yield unexpected contribution patterns. Jumelet et al. (2019) employ Contextual Decomposition to investigate the behaviour of an LSTM LM on a number agreement task, and demonstrate that their model employs a default reasoning heuristic when resolving the task, with a strong bias for singular verbs. Hao (2020) investigates an attribution method on a range of number agreement constructions containing relative clauses, showing that LMs possess a robust notion of number information.",
"cite_spans": [
{
"start": 468,
"end": 489,
"text": "Jumelet et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 728,
"end": 738,
"text": "Hao (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "132",
"sec_num": null
},
{
"text": "In this section, we first look at extending Contextual Decomposition for the SHA-RNN. Following this, we outline the models which we will use for our experiments. Finally, we explain how we extended the Number Agreement task and how we applied Contextual Decomposition to the NA task, forming the Subject Attribution task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The original Contextual Decomposition paper (Murdoch et al., 2018) only defines the decomposition for an LSTM model. The SHA-RNN also contains several operations that have not previously been covered by these two papers. As such, we have defined the decompositions for the following two operations: Layer Normalization (Ba et al., 2016) and the Softmax operation in the Single Headed Attention layer (Merity, 2019) . Based on these new decompositions, we leverage the implementation of Contextual Decomposition in the diagNNose library of Jumelet (2020) to also cover our SHA-RNN.",
"cite_spans": [
{
"start": 400,
"end": 414,
"text": "(Merity, 2019)",
"ref_id": "BIBREF20"
},
{
"start": 539,
"end": 553,
"text": "Jumelet (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "Layer Normalization Layer Normalization estimates the normalization statistics over the summed inputs to the neurons in a hidden layer. A definition of the Layer Normalization operation can be found in Eq. 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00b5 = 1 n n i=1 a i , \u03c3 = 1 n n i=1 (a i \u2212 \u00b5) 2 , LN(a) = \u03b1 a \u2212 \u00b5 \u03c3 + \u03b4,",
"eq_num": "(5)"
}
],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "where a represents the inputs to the hidden layer, n the number of hidden units and \u03b1 and \u03b4 are learnable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "Because it looks at all inputs in a layer, both \u03b2 and \u03b3 might interact within this layer. As such, we must define how we handle the decomposition of this operation, which we show in Eq. (6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "\u03b2 l+1 = LN(\u03b2 l ) \u2212 \u03b4, \u03b3 l+1 = LN(\u03b2 l + \u03b3 l ) \u2212 LN(\u03b2 l ) + \u03b4 LN(a) = LN(\u03b2 l + \u03b3 l ) = \u03b2 l+1 + \u03b3 l+1 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "Our decomposition strictly separates the \u03b3 contributions from the \u03b2 contributions, which means that no information from \u03b3 may be captured in \u03b2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
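A small NumPy sketch of Eq. (6), assuming an element-wise affine transformation with parameters alpha and delta and a small epsilon for numerical stability; it checks that the two new components sum exactly to the layer-normalised full input.

```python
# Illustrative sketch of the layer-normalisation decomposition in Eq. (6).
import numpy as np


def layer_norm(a, alpha, delta, eps=1e-5):
    mu = a.mean()
    sigma = np.sqrt(((a - mu) ** 2).mean() + eps)
    return alpha * (a - mu) / sigma + delta


def decompose_layer_norm(beta, gamma, alpha, delta):
    # beta is normalised on its own (minus the bias), gamma absorbs the rest.
    new_beta = layer_norm(beta, alpha, delta) - delta
    new_gamma = layer_norm(beta + gamma, alpha, delta) - layer_norm(beta, alpha, delta) + delta
    return new_beta, new_gamma


rng = np.random.default_rng(1)
beta, gamma = rng.normal(size=650), rng.normal(size=650)
alpha, delta = np.ones(650), np.full(650, 0.1)
nb, ng = decompose_layer_norm(beta, gamma, alpha, delta)
print(np.allclose(nb + ng, layer_norm(beta + gamma, alpha, delta)))  # True
```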
{
"text": "Softmax Similar to our treatment of the Layer Normalization operation, we strictly separate \u03b3 from the \u03b2 components, as can be observed in Eq. 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 l+1 = Softmax(\u03b2 l ), \u03b3 l+1 = Softmax(\u03b2 l + \u03b3 l ) \u2212 \u03b2 l+1",
"eq_num": "(7)"
}
],
"section": "Contextual Decomposition for the SHA-RNN",
"sec_num": "3.1"
},
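A companion sketch of Eq. (7) for the attention softmax, again in plain NumPy and with illustrative attention scores; beta is passed through the softmax on its own and gamma is defined as the residual.

```python
# Illustrative sketch of the softmax decomposition in Eq. (7).
import numpy as np


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def decompose_softmax(beta_scores, gamma_scores):
    new_beta = softmax(beta_scores)
    new_gamma = softmax(beta_scores + gamma_scores) - new_beta
    return new_beta, new_gamma


beta_scores = np.array([0.2, 1.5, -0.3])
gamma_scores = np.array([0.1, -0.8, 0.4])
nb, ng = decompose_softmax(beta_scores, gamma_scores)
print(np.allclose(nb + ng, softmax(beta_scores + gamma_scores)))  # True
```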
{
"text": "For our experiments we consider two types of models: the attention SHA-RNN model and the nonattention LSTM model. Below, we will outline the specific architectures used and training hyperparameters chosen to build and train these models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "LSTM model The LSTM model we use is similar to the one used by Gulordava et al. (2018) . The model is a stacked two layer LSTM, each with 650 hidden units. Word embeddings are trained alongside the model and the weights of the embedding layer are tied to the decoder layer (Inan et al., 2017) .",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 273,
"end": 292,
"text": "(Inan et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.2.1"
},
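A minimal PyTorch sketch of the non-attention baseline described above: a two-layer LSTM language model with 650 hidden units and embedding weights tied to the decoder. The vocabulary size of 50,000 matches the training setup described later; the class and argument names are assumptions.

```python
# Sketch of a tied-embedding LSTM language model (illustrative).
import torch.nn as nn


class LstmLm(nn.Module):
    def __init__(self, vocab_size=50000, d_hidden=650, n_layers=2, dropout=0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_hidden)
        self.lstm = nn.LSTM(d_hidden, d_hidden, n_layers,
                            dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(d_hidden, vocab_size)
        self.decoder.weight = self.embedding.weight  # tied embedding/decoder weights

    def forward(self, token_ids, state=None):
        h, state = self.lstm(self.embedding(token_ids), state)
        return self.decoder(h), state
```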
{
"text": "SHA-RNN model For our SHA-RNN we use two blocks (see Fig. 1 ), each with an LSTM with 650 hidden units. Furthermore, our model also utilises a trained word embedding layer with tied weights, similar to our non-attention model. Finally, our Boom layer does not increase our dimension size, but keeps it at 650. This means our Boom layer reduces to a feed-forward layer with GELU activations.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 59,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.2.1"
},
{
"text": "We trained four models to conduct our experiments on. For both the attention (SHA-RNN) and nonattention (LSTM) model architectures, a model was trained on a Dutch and English corpus. Both corpora are based on wikipedia text. Following Gulordava et al. 2018, only the 50.000 most common words were retained in the vocabulary for both corpora, replacing all other words with <unk> tokens. The corpora were split into a training, validation and test set. The training of the models is split up in two phases: first, the model is trained for thirty epochs with a learning rate of 0.02 and a batch size of 64. Then, we fine-tune the model for an additional five epochs with the learning rate halved to 0.01 and a batch size of 16. During training, we set dropout to 0.1. We use the LAMB optimizer (You et al., 2019) following Merity (2019).",
"cite_spans": [
{
"start": 792,
"end": 810,
"text": "(You et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2.2"
},
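The two-phase schedule above can be summarised in a short sketch. LAMB is not part of core PyTorch, so Adam is used here as a stand-in (swap in a LAMB implementation to match the text); data loading, sequence batching and the loss function are assumed to be provided by the caller.

```python
# Sketch of the two-phase training schedule (Adam as a stand-in for LAMB).
import torch

PHASES = [
    {"epochs": 30, "lr": 0.02, "batch_size": 64},  # initial training
    {"epochs": 5,  "lr": 0.01, "batch_size": 16},  # fine-tuning phase
]


def train(model, make_dataloader, loss_fn):
    for phase in PHASES:
        optimizer = torch.optim.Adam(model.parameters(), lr=phase["lr"])
        loader = make_dataloader(batch_size=phase["batch_size"])
        for _ in range(phase["epochs"]):
            for token_ids, targets in loader:
                logits, _ = model(token_ids)            # (batch, time, vocab)
                loss = loss_fn(logits.flatten(0, 1), targets.flatten())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```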
{
"text": "In this work, we extend the Number Agreement (NA) task to the Dutch language. We do so by applying the same procedure that was used in Lakretz et al. (2019) , namely by creating a synthetic dataset. This is different from the works of Linzen et al. (2016) and Gulordava et al. (2018) , which derived their sentences directly from corpora.",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 235,
"end": 255,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 260,
"end": 283,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extending Number Agreement",
"sec_num": "3.3"
},
{
"text": "Our version of the NA task contains a total of five different templates. First of all, we use a simple template called Simple in which the verb immediately follows the subject. We then extend this by adding a prepositional phrase which modifies the subject between the subject and the verb, either by having a prepositional phrase containing a noun (NounPP) or containing a proper noun (NamePP). We then have the sentence conjunction (SConj) task, which consists of two Simple templates separated by a conjunction. The challenge of the SConj task is correctly predicting the number of the verb in the second sentence. Finally, we have the ThatNounPP template, which contains a declarative content clause which incorporates a second subject-verb dependency with a noun modifying prepositional phrase in its that-clause. An overview of the templates including example sentences can be found in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 892,
"end": 899,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Extending Number Agreement",
"sec_num": "3.3"
},
{
"text": "We create our final NA-task by obtaining frequent words from our corpus to populate these sentence templates. This process is done for both the Dutch and the English corpora, such that we can more easily compare the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending Number Agreement",
"sec_num": "3.3"
},
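As an illustration of how such a synthetic dataset can be populated, the sketch below fills a NounPP-style template with a handful of frequent English words. The word lists, preposition and verb forms here are invented examples, not the actual lexicon used for the Dutch and English datasets.

```python
# Illustrative generation of NounPP items (hypothetical word lists).
from itertools import product

SUBJECTS = {"S": ["boy", "farmer"], "P": ["boys", "farmers"]}
ATTRACTORS = {"S": ["car", "tree"], "P": ["cars", "trees"]}
VERBS = {"S": "greets", "P": "greet"}


def nounpp_items():
    """Yield (sentence prefix, correct verb, wrong verb) for every condition."""
    for subj_num, attr_num in product("SP", repeat=2):
        for subj, attr in product(SUBJECTS[subj_num], ATTRACTORS[attr_num]):
            prefix = f"The {subj} near the {attr}"
            yield prefix, VERBS[subj_num], VERBS["P" if subj_num == "S" else "S"]


for prefix, correct, wrong in list(nounpp_items())[:2]:
    print(f"{prefix} ... correct: {correct}, wrong: {wrong}")
```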
{
"text": "We propose a new task for input feature attribution methods based on the Number Agreement task: Subject Attribution. The goal of the task to produce explanations in such a way that congruent subjectverb relations gain higher attributions than noncongruent ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject Attribution Task",
"sec_num": "3.4"
},
{
"text": "In context of the NA task this means that we compare the attribution scores of the subject of the sentence in the case where it is and is not congruent with the number of the verb. In our evaluation we consider a higher attribution for the congruent noun compared to the non-congruent noun to be correct, as this would be in line with human intuition. A schematic overview of this task can be found in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 402,
"end": 408,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subject Attribution Task",
"sec_num": "3.4"
},
{
"text": "In this work, we use the task in the following way: we apply our attribution method on each sentence within our dataset, generating input feature attributions. We then compare the subject attributions of these sentences to find in which percentage of the sentences the attributions for the subject were higher for the congruent verb than the non congruent one. The boy thinks that the mothers at the car miss Figure 2 : Schematic overview of the default number agreement task that compares the output probabilities of the LM, and the subject attribution task that compares the attribution scores of the subject to the correct and incorrect form of the verb. We hypothesise that for a model with a sophisticated understanding of number agreement, the subject's contribution to the correct verb form is greater than to the incorrect form.",
"cite_spans": [],
"ref_spans": [
{
"start": 409,
"end": 417,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subject Attribution Task",
"sec_num": "3.4"
},
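A sketch of how the resulting score can be computed, assuming simple sentence records that carry the sentence prefix, the subject span and the two verb forms, and a hypothetical cd_subject_attribution function that wraps the Contextual Decomposition forward pass:

```python
# Sketch of the subject-attribution score (helper names are hypothetical).
def subject_attribution_score(sentences, cd_subject_attribution):
    """Fraction of sentences in which the subject receives a higher attribution
    towards the congruent verb form than towards the non-congruent one."""
    correct = 0
    for sent in sentences:
        a_congruent = cd_subject_attribution(
            sent.prefix, sent.subject_span, verb=sent.congruent_verb)
        a_incongruent = cd_subject_attribution(
            sent.prefix, sent.subject_span, verb=sent.incongruent_verb)
        correct += a_congruent > a_incongruent
    return correct / len(sentences)
```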
{
"text": "In our work, we have considered several experiments. Firstly, we evaluate the ability of our models to handle the data itself by comparing the model perplexities. Following this, we look at the Number Agreement and Subject Attribution tasks to evaluate the differences between our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and analysis",
"sec_num": "4"
},
{
"text": "To establish the adequacy of our models on the data, we calculate the perplexity for each model over the held-out test set (Table 2 ). Due to the different data sets used for the two languages, direct comparisons between the perplexity scores for the English and Dutch models are not feasible. We do observe that for both languages, the SHA-RNN yields a perplexity score that is 5% lower than the score of the LSTM counterpart. ",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "(Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model Perplexities",
"sec_num": "4.1"
},
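For reference, perplexity here is the exponentiated average negative log-likelihood over the held-out tokens; a minimal sketch:

```python
# Perplexity from per-token log probabilities (natural log).
import math

def perplexity(token_log_probs):
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

print(perplexity([math.log(0.1)] * 4))  # 10.0 for a uniform 1-in-10 guess
```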
{
"text": "To assess the performance of the different language models, we consider the different sentence structures presented in Table 1 . For each sentence structure, we evaluate the predictive performance of the model on matching the form of the verb to the number of the relevant subject. For example, given a singular subject, we evaluate p(VERB S |SUBJ S ) > p(VERB P |SUBJ S ). The same sentence templates have been used for the Subject Attribution task. We apply Contextual Decomposi-tion to the sentences to investigate the behavioural differences between the models. We examine the results of our experiments along two axes: language and attention. First, we compare the Dutch and English language models. Following this, we analyse the differences between the attention and non-attention models.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Number Agreement",
"sec_num": "4.2"
},
{
"text": "Across the board, the Dutch models perform slightly better on the NA tasks than the English models. This could be due to the data sets used, as the Dutch data set was larger than the English one, giving the Dutch model more opportunities to learn. We do find similar patterns between the Dutch models (Table 3a ) and the English models (Table 3b) : between the two languages, the models generally share the tasks and conditions that they perform well on. There are exceptions to this, as in the case of the Simple NA task for the LSTM, with Dutch models performing better on the singular condition while their English counterparts achieve higher scores on the plural condition.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 310,
"text": "(Table 3a",
"ref_id": "TABREF5"
},
{
"start": 336,
"end": 346,
"text": "(Table 3b)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Language axis",
"sec_num": "4.2.1"
},
{
"text": "When we compare the results of the models on the Subject Attribution task in Tables 3a and 3b, we find more substantial differences between the models across the languages. In case of the English models, the SHA-RNN performed rather poorly on the plural conditions of the Subject Attribution task. This is remarkable, given that the Dutch SHA-RNN yields significantly higher scores on these conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language axis",
"sec_num": "4.2.1"
},
{
"text": "We observe that for the English SHA-RNN, contextual decomposition consistently yields attribution scores that are lower for the plural conditions than those for the singular conditions (see Fig. 3 for an example). In the Dutch SHA-RNN, this behaviour is only apparent for the Simple, NounPP and NamePP tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 196,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language axis",
"sec_num": "4.2.1"
},
{
"text": "Jumelet et al. (2019) encountered similar behaviours when applying CD to an LSTM language model. They attributed the lower attributions to a bias towards singular verbs in the model, which resulted in a form of default reasoning. However, our accuracy results do not indicate a similar bias, as we found all our models performing well on both plural and singular subjects. This raises the question as to what is causing this behaviour, which we leave for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language axis",
"sec_num": "4.2.1"
},
{
"text": "Overall, these results do not demonstrate any significant differences between the Dutch and English models. While we have shown that differences occur across conditions, we find that for most conditions, both models behave similarly, with the two LSTM models displaying more similarities than the SHA-RNN models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language axis",
"sec_num": "4.2.1"
},
{
"text": "To compare the attention models (SHA-RNNs) to the non-attention model (the LSTMs), we again first consider the accuracy scores in Tables 3a and 3b . A comparison between the SHA-RNN and the LSTM shows that the SHA-RNN performs slightly worse than the LSTM by a small margin. There are some cases where this difference is more pronounced, such as for the English ThatNounPP task (see Table 3b ), where we observe large differences for the singular subject conditions. This behaviour goes against the perplexity results in Table 2, which indicate a better performing SHA-RNN. This is in line with the results found by Nikoulina et al. (2021) , who demonstrate that perplexity is not always directly correlated to performance on downstream tasks, as appears to be the case for our Number Agreement task.",
"cite_spans": [
{
"start": 617,
"end": 640,
"text": "Nikoulina et al. (2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 130,
"end": 147,
"text": "Tables 3a and 3b",
"ref_id": "TABREF5"
},
{
"start": 384,
"end": 392,
"text": "Table 3b",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Attention axis",
"sec_num": "4.2.2"
},
{
"text": "Looking at the model explanations in Tables 3a and 3b we see that across the board the LSTM performs better on the Subject Attribution task. We find that both SHA-RNN models generally do not produce the expected attributions for the plural subject conditions, while there are very few instances of the LSTM performing under 50%, only failing by a large margin for the English LSTM on the Simple P and NamePP P conditions (see Table 3a ).",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 434,
"text": "Table 3a",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Attention axis",
"sec_num": "4.2.2"
},
{
"text": "From our observations, the attention and nonattention models behave differently both in terms of accuracy scores on the NA task and the explanations from the Subject Attribution task. We find that the difference between the architectures of the SHA-RNN and the LSTM leads to significant variations in general performance as well as behavioural patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention axis",
"sec_num": "4.2.2"
},
{
"text": "In this paper, we compared both attention (SHA-RNN) and non-attention (LSTM) language models across two languages, namely Dutch and English. To test these models, we extended the Number Agreement task from Lakretz et al. (2019) to the Dutch language, which allows us to compare these models across both languages. In addition to this, we extended a feature attribution method called Contextual Decomposition (Murdoch et al., 2018) to the SHA-RNN model. We applied Contextual : Overview of prediction accuracy scores (the numbers outside the brackets) and subject attribution behaviour (in brackets) on the Number Agreement tasks for the Dutch and English language models. For each task, the noun inflections are given in the condition column, with S indicating singular and P indicating plural. The underlined letter in the condition indicates the noun belonging to the verb that is predicted. The numbers in brackets denote the performance on the subject attribution task: the percentage of cases in which the attributions of the subjects were higher to the congruent verb than to the non-congruent ones. The colour coding of the table cells follows the performance on this subject attribution task along a colour gradient from green (high performance) to red (low performance).",
"cite_spans": [
{
"start": 206,
"end": 227,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 408,
"end": 430,
"text": "(Murdoch et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Decomposition to the Number Agreement task to obtain interpretable explanations and compared the different models from a feature attribution standpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We found that both the Dutch and English models behaved similarly in terms of accuracy. While general performance differed between the two languages, we did find that similar behavioural patterns emerged from the models. This partially held for the explanations obtained through Contextual Decomposition, where we did uncover differences. These differences were centred around the SHA-RNN, which we found behaved as if it applied default reasoning similar to the work of Jumelet et al. (2019) .",
"cite_spans": [
{
"start": 471,
"end": 492,
"text": "Jumelet et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Comparing our attention and non-attention models, we found immediate differences, both when comparing the performance on the Number Agreement task as when looking into the attributions. Both models performed differently on the same tasks and feature attributions varied between them. We found that our LSTM performed better on the attribution task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our current results suggest that attention and non-attention models behave differently according to Contextual Decomposition. More specifically, we find that the attention models have more difficulty producing correct attributions for plural sentences. A logical next step would then be to compare our current results by those obtained through different attribution methods such as SHAP (Lundberg and Lee, 2017) and Integrated Gradients (Sundararajan et al., 2017). Should we find that Contextual Decomposition holds up well to these other Figure 3 : Contextual Decomposition attributions for the English models (SHA-RNN and LSTM) on the SP and PS conditions of the NounPP task. Fig. 3a shows the attributions of two individial sentences, while Figs. 3b and 3c show aggregated attributions over all sentences of that condition. Note that in Fig. 3b the attribution for the subject under the singular verb is both higher in the SP condition as well as in PS condition, while in Fig. 3c the attribution is higher for the subject matching the verb form.",
"cite_spans": [],
"ref_spans": [
{
"start": 540,
"end": 548,
"text": "Figure 3",
"ref_id": null
},
{
"start": 679,
"end": 686,
"text": "Fig. 3a",
"ref_id": null
},
{
"start": 841,
"end": 848,
"text": "Fig. 3b",
"ref_id": null
},
{
"start": 977,
"end": 984,
"text": "Fig. 3c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "methods, it could then prove to be a valuable method for approximating Shapley values in polynomial time. Moreover, it is worth looking into the application of Contextual Decomposition in Transformer architectures, which rely more heavily on these kinds of attention mechanisms. An alternative line of research that we would like to explore is the attention mechanism itself. Even though it has been shown that attention does not provide guarantees for explainability (Jain and Wallace, 2019) , it would still be worthwhile to investigate the attention patterns that are employed by the SHA-RNN.",
"cite_spans": [
{
"start": 468,
"end": 492,
"text": "(Jain and Wallace, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "34--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger. 2020. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for lan- guage models. Transactions of the Association for Computational Linguistics, 8:34-48.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5426"
]
},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240-248, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Assessing BERT's Syntactic Abilities",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing BERT's Syntactic Abilities. page 4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving neural language models with a continuous cache",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Pro- ceedings. OpenReview.net.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018",
"volume": "1",
"issue": "",
"pages": "1195--1205",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1195-1205. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Attribution analysis of grammatical dependencies in lstms",
"authors": [
{
"first": "Yiding",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiding Hao. 2020. Attribution analysis of grammatical dependencies in lstms. CoRR, abs/2005.00062.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bridging nonlinearities and stochastic regularizers with gaussian error linear units",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaus- sian error linear units. CoRR, abs/1606.08415.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tying word vectors and word classifiers: A loss framework for language modeling",
"authors": [
{
"first": "Hakan",
"middle": [],
"last": "Inan",
"suffix": ""
},
{
"first": "Khashayar",
"middle": [],
"last": "Khosravi",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Con- ference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is not explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "diagnnose: A library for neural activation analysis",
"authors": [],
"year": null,
"venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet. 2020. diagnnose: A library for neu- ral activation analysis. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpret- ing Neural Networks for NLP, pages 342-350.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Do language models understand anything? on the ability of LSTMs to understand negative polarity items",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "222--231",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5424"
]
},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet and Dieuwke Hupkes. 2018. Do lan- guage models understand anything? on the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 222-231, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Analysing neural language models: Contextual decomposition reveals default reasoning in number and gender assignment",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1001"
]
},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes. 2019. Analysing neural language models: Con- textual decomposition reveals default reasoning in number and gender assignment. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 1-11, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploring processing of nested dependencies in neural-network language models and humans. CoRR, abs",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Lakretz",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Vergallito",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Dehaene",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, Marco Marelli, Marco Baroni, and Stanislas De- haene. 2020. Exploring processing of nested depen- dencies in neural-network language models and hu- mans. CoRR, abs/2006.11098.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The emergence of number and syntax units in LSTM language models",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Lakretz",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Desbordes",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Dehaene",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Ba- roni. 2019. The emergence of number and syn- tax units in LSTM language models. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11-20, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, Yonatan Belinkov, and Dieuwke Hupkes, editors. 2019. Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Florence, Italy.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Assessing the ability of lstms to learn syntaxsensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax- sensitive dependencies. Trans. Assoc. Comput. Lin- guistics, 4:521-535.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "Scott",
"middle": [
"M"
],
"last": "Lundberg",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in Neural Information Processing Systems 30: Annual Conference on Neural Information Pro- cessing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Single headed attention RNN: stop thinking with your head. CoRR, abs",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
}
],
"year": 1911,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity. 2019. Single headed attention RNN: stop thinking with your head. CoRR, abs/1911.11423.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Are sixteen heads really better than one?",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14014--14024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Ad- vances in Neural Information Processing Systems 32: Annual Conference on Neural Information Pro- cessing Systems 2019, NeurIPS 2019, December 8- 14, 2019, Vancouver, BC, Canada, pages 14014- 14024.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Beyond word importance: Contextual decomposition to extract interactions from lstms",
"authors": [
{
"first": "W",
"middle": [
"James"
],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposi- tion to extract interactions from lstms. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenRe- view.net.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The rediscovery hypothesis: Language models need to meet linguistics",
"authors": [
{
"first": "Vassilina",
"middle": [],
"last": "Nikoulina",
"suffix": ""
},
{
"first": "Maxat",
"middle": [],
"last": "Tezekbayev",
"suffix": ""
},
{
"first": "Nuradil",
"middle": [],
"last": "Kozhakhmet",
"suffix": ""
},
{
"first": "Madina",
"middle": [],
"last": "Babazhanova",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Gall\u00e9",
"suffix": ""
},
{
"first": "Zhenisbek",
"middle": [],
"last": "Assylbekov",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gall\u00e9, and Zhenisbek Assylbekov. 2021. The rediscovery hypothesis: Language models need to meet linguis- tics. CoRR, abs/2103.01819.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "340--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Poerner, Hinrich Sch\u00fctze, and Benjamin Roth. 2018. Evaluating neural network explanation meth- ods using hybrid documents and morphosyntactic agreement. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 340-350.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "why should I trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"T\u00falio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "97--101",
"other_ids": {
"DOI": [
"10.18653/v1/n16-3020"
]
},
"num": null,
"urls": [],
"raw_text": "Marco T\u00falio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should I trust you?\": Explain- ing the predictions of any classifier. In Proceedings of the Demonstrations Session, NAACL HLT 2016, The 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, San Diego Califor- nia, USA, June 12-17, 2016, pages 97-101. The As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Explainable AI: interpreting, explaining and visualizing deep learning",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Lars",
"middle": [
"Kai"
],
"last": "Hansen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "11700",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Samek, Gr\u00e9goire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert M\u00fcller. 2019. Explainable AI: interpreting, explaining and visual- izing deep learning, volume 11700. Springer Na- ture.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Lstms compose-and learn-bottom-up",
"authors": [
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "2797--2809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi Saphra and Adam Lopez. 2020. Lstms com- pose-and learn-bottom-up. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2797-2809.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A value for n-person games",
"authors": [
{
"first": "Lloyd",
"middle": [
"S"
],
"last": "Shapley",
"suffix": ""
}
],
"year": 1953,
"venue": "Contributions to the Theory of Games",
"volume": "2",
"issue": "",
"pages": "307--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloyd S Shapley. 1953. A value for n-person games. Contributions to the Theory of Games, 2(28):307- 317.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "2nd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2014. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. In 2nd International Conference on Learn- ing Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Pro- ceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Aus- tralia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Bert rediscovers the classical nlp pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Some additional experiments extending the tech report \"Assessing BERT's Syntactic Abilities",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf. 2019. Some additional experiments ex- tending the tech report \"Assessing BERT's Syntactic Abilities\" by Yoav Goldberg. page 7.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "A schematic overview of a block in the SHA-RNN. A block in the SHA-RNN is composed of an LSTM, a single headed attention layer and a Boom feed-forward layer. Throughout the model, layer normalisation is used. Hidden states are passed between subsequent steps in the model. The memory state is concatenated with previous memory states, and passed on as well.",
"uris": null
},
"TABREF0": {
"num": null,
"text": "V dat/that DET N PREP DET N VDe jongen denkt dat de moeders bij de auto missen",
"html": null,
"content": "<table><tr><td/><td>Template</td><td>Example</td></tr><tr><td>Simple</td><td>DET N V</td><td>De jongen groet</td></tr><tr><td/><td/><td>The boy greets</td></tr><tr><td>NounPP</td><td>DET N PREP DET N V</td><td>De jongens bij de auto groeten</td></tr><tr><td/><td/><td>The boys at the car greet</td></tr><tr><td>NamePP</td><td>DET N PREP NAME V</td><td>De jongens bij Pat groeten</td></tr><tr><td/><td/><td>The boys at Pat greet</td></tr><tr><td>SConj</td><td>DET N V en/and DET N V</td><td>De jongen groet en de moeders missen</td></tr><tr><td/><td/><td>The boy greets and the mothers miss</td></tr><tr><td colspan=\"2\">ThatNounPP DET N</td><td/></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"text": "Overview of the templates for the NA-tasks. DET is a determiner, N a noun, NAME a name of a person, V a verb and PREP a preposition. The underlined noun in the template signifies the subject belonging to the relevant verb.",
"html": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">NUMBER AGREEMENT TASK</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">P(approve) = 0.16</td><td>&gt;</td><td/><td colspan=\"2\">P(approves) = 0.04</td><td/></tr><tr><td>LM</td><td/><td/><td/><td>LM</td><td/><td/><td/></tr><tr><td>The</td><td>boys</td><td>at</td><td>Pat</td><td>The</td><td>boys</td><td>at</td><td>Pat</td></tr><tr><td>0.02</td><td>0.25</td><td>0.01</td><td>-0.12</td><td>0.01</td><td>-0.13</td><td>0.02</td><td>0.14</td></tr><tr><td/><td/><td/><td>&gt;</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">SUBJECT ATTRIBUTION TASK</td><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Model perplexities",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"text": "",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"2\">NounPP PS Attributions</td><td/><td colspan=\"2\">NounPP SP Attributions</td><td/><td colspan=\"2\">NounPP PS Attributions</td><td/><td colspan=\"2\">NounPP SP Attributions</td><td/><td colspan=\"2\">NounPP PS Attributions</td></tr><tr><td>0.21</td><td>-0.18</td><td>bias</td><td>-0.44</td><td>-0.38</td><td>bias</td><td>-0.3</td><td>-0.26</td><td>bias</td><td>-2.75</td><td>-2.67</td><td>bias</td><td>-2.82</td><td>-2.65</td></tr><tr><td>0.34</td><td>0.28</td><td>det</td><td>0.51</td><td>0.43</td><td>det</td><td>0.44</td><td>0.38</td><td>det</td><td>0.27</td><td>-0.03</td><td>det</td><td>0.18</td><td>0.19</td></tr><tr><td>0.35</td><td>0.3</td><td>subjS</td><td>0.39</td><td>0.33</td><td>subjP</td><td>0.37</td><td>0.33</td><td>subjS</td><td>0.02</td><td>-0.06</td><td>subjP</td><td>-0.14</td><td>0.09</td></tr><tr><td>0.5</td><td>0.44</td><td>prep</td><td>0.42</td><td>0.36</td><td>prep</td><td>0.37</td><td>0.32</td><td>prep</td><td>-0.29</td><td>-0.2</td><td>prep</td><td>-0.08</td><td>-0.02</td></tr><tr><td>0.36</td><td>0.32</td><td>det</td><td>0.5</td><td>0.42</td><td>det</td><td>0.47</td><td>0.41</td><td>det</td><td>-0.12</td><td>-0.09</td><td>det</td><td>-0.04</td><td>0.0</td></tr><tr><td>0.41</td><td>0.35</td><td>nounP</td><td>0.51</td><td>0.43</td><td>nounS</td><td>0.49</td><td>0.42</td><td>nounP</td><td>0.04</td><td>0.15</td><td>nounS</td><td>0.37</td><td>0.38</td></tr><tr><td/><td/><td/><td colspan=\"2\">verbS verbP</td><td/><td colspan=\"2\">verbS verbP</td><td/><td colspan=\"2\">verbS verbP</td><td/><td colspan=\"2\">verbS verbP</td></tr><tr><td colspan=\"2\">(a) Example SHA-RNN attributions</td><td colspan=\"6\">(b) Aggregated SHA-RNN attributions</td><td colspan=\"6\">(c) Aggregated LSTM attributions</td></tr></table>",
"type_str": "table"
}
}
}
}