{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:57:01.234877Z"
},
"title": "What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Chicago",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event predictionand, in particular, it shows clear insensitivity to the contextual impacts of negation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event predictionand, in particular, it shows clear insensitivity to the contextual impacts of negation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-training of NLP models with a language modeling objective has recently gained popularity as a precursor to task-specific fine-tuning. Pretrained models like BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018a) have advanced the state of the art in a wide variety of tasks, suggesting that these models acquire valuable, generalizable linguistic competence during the pre-training process. However, though we have established the benefits of language model pretraining, we have yet to understand what exactly about language these models learn during that process.",
"cite_spans": [
{
"start": 166,
"end": 187,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 197,
"end": 219,
"text": "(Peters et al., 2018a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper aims to improve our understanding of what language models (LMs) know about language, by introducing a set of diagnostics targeting a range of linguistic capacities drawn from human psycholinguistic experiments. Because of their origin in psycholinguistics, these diagnostics have two distinct advantages: They are carefully controlled to ask targeted questions about linguistic capabilities, and they are designed to ask these questions by examining word predictions in context, which allows us to study LMs without any need for task-specific fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Beyond these advantages, our diagnostics distinguish themselves from existing tests for LMs in two primary ways. First, these tests have been chosen specifically for their capacity to reveal insensitivities in predictive models, as evidenced by patterns that they elicit in human brain responses. Second, each of these tests targets a set of linguistic capacities that extend beyond the primarily syntactic focus seen in existing LM diagnostics-we have tests targeting commonsense/pragmatic inference, semantic roles and event knowledge, category membership, and negation. Each of our diagnostics is set up to support tests of both word prediction accuracy and sensitivity to distinctions between good and bad context completions. Although we focus on the BERT model here as an illustrative case study, these diagnostics are applicable for testing of any language model. This paper makes two main contributions. First, we introduce a new set of targeted diagnostics for assessing linguistic capacities in language models. 1 Second, we apply these tests to shed light on strengths and weaknesses of the popular BERT model. We find that BERT struggles with challenging commonsense/pragmatic inferences and role-based event prediction; that it is generally robust on within-category distinctions and role reversals, but with lower sensitivity than humans; and that it is very strong at associating nouns with hypernyms. Most strikingly, however, we find that BERT fails completely to show generalizable understanding of negation, raising questions about the aptitude of LMs to learn this type of meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is important to be clear that in using these diagnostics, we are not testing whether LMs are psycholinguistically plausible. We are using these tests simply to examine LMs' general linguistic knowledge, specifically by asking what information the models are able to use when assigning probabilities to words in context. These psycholinguistic tests are well-suited to asking this type of question because a) the tests are designed for drawing conclusions based on predictions in context, allowing us to test LMs in their most natural setting, and b) the tests are designed in a controlled manner, such that accurate word predictions in context depend on particular types of information. In this way, these tests provide us with a natural means of diagnosing what kinds of information LMs have picked up on during training. Clarifying the linguistic knowledge acquired during LM-based training is increasingly relevant as state-of-the-art NLP models shift to be predominantly based on pre-training processes involving word prediction in context. In order to understand the fundamental strengths and limitations of these models-and in particular, to understand what allows them to generalize to many different taskswe need to understand what linguistic competence and general knowledge this LM-based pre-training makes available (and what it does not). The importance of understanding LM-based pre-training is also the motivation for examining pre-trained BERT, as we do in the present paper, despite the fact that the pre-trained form is typically used only as a starting point for fine-tuning. Because it is the pre-training that seemingly underlies the generalization power of the BERT model, allowing for simple fine-tuning to perform so impressively, it is the pre-trained model that presents the most important questions about the nature of generalizable linguistic knowledge in BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Use of Psycholinguistic Tests on Language Models",
"sec_num": "2"
},
{
"text": "This paper contributes to a growing effort to better understand the specific linguistic capacities achieved by neural NLP models. Some approaches use fine-grained classification tasks to probe information in sentence embeddings (Adi et al., 2016; Conneau et al., 2018; Ettinger et al., 2018) , or token-level and other sub-sentence level information in contextual embeddings (Tenney et al., 2019b; Peters et al., 2018b) . Some of this work has targeted specific linguistic phenomena such as function words (Kim et al., 2019) . Much work has attempted to evaluate systems' overall level of ''understanding'', often with tasks such as semantic similarity and entailment (Wang et al., 2018; Bowman et al., 2015; Agirre et al., 2012; Dagan et al., 2005; Bentivogli et al., 2016) , and additional work has been done to design curated versions of these tasks to test for specific linguistic capabilities (Dasgupta et al., 2018; Poliak et al., 2018; . Our diagnostics complement this previous work in allowing for direct testing of language models in their natural setting-via controlled tests of word prediction in context-without requiring probing of extracted representations or task-specific fine-tuning.",
"cite_spans": [
{
"start": 228,
"end": 246,
"text": "(Adi et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 247,
"end": 268,
"text": "Conneau et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 269,
"end": 291,
"text": "Ettinger et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 375,
"end": 397,
"text": "(Tenney et al., 2019b;",
"ref_id": "BIBREF34"
},
{
"start": 398,
"end": 419,
"text": "Peters et al., 2018b)",
"ref_id": "BIBREF31"
},
{
"start": 506,
"end": 524,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 668,
"end": 687,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 688,
"end": 708,
"text": "Bowman et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 709,
"end": 729,
"text": "Agirre et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 730,
"end": 749,
"text": "Dagan et al., 2005;",
"ref_id": "BIBREF8"
},
{
"start": 750,
"end": 774,
"text": "Bentivogli et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 898,
"end": 921,
"text": "(Dasgupta et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 922,
"end": 942,
"text": "Poliak et al., 2018;",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "More directly related is existing work on analyzing linguistic capacities of language models specifically. This work is particularly dominated by testing of syntactic awareness in LMs, and often mirrors the present work in using targeted evaluations modeled after psycholinguistic tests (Linzen et al., 2016; Gulordava et al., 2018; Marvin and Linzen, 2018; Wilcox et al., 2018; Chowdhury and Zamparelli, 2018; Futrell et al., 2019) . These analyses, like ours, typically draw conclusions based on LMs' output probabilities. Additional work has examined the internal dynamics underlying LMs' capturing of syntactic information, including testing of syntactic sensitivity in different components of the LM and at different timesteps within the sentence (Giulianelli et al., 2018) , or in individual units (Lakretz et al., 2019) .",
"cite_spans": [
{
"start": 287,
"end": 308,
"text": "(Linzen et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 309,
"end": 332,
"text": "Gulordava et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 333,
"end": 357,
"text": "Marvin and Linzen, 2018;",
"ref_id": "BIBREF26"
},
{
"start": 358,
"end": 378,
"text": "Wilcox et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 379,
"end": 410,
"text": "Chowdhury and Zamparelli, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 411,
"end": 432,
"text": "Futrell et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 752,
"end": 778,
"text": "(Giulianelli et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 804,
"end": 826,
"text": "(Lakretz et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "This previous work analyzing language models focuses heavily on syntactic competence-semantic phenomena like negative polarity items are tested in some studies (Marvin and Linzen, 2018; Jumelet and Hupkes, 2018) , but the tested capabilities in these cases are still firmly rooted in the notion of detecting structural dependencies. In the present work we expand beyond the syntactic focus of the previous literature, testing for capacities including commonsense/pragmatic reasoning, semantic role and event knowledge, category membership, and negation-while continuing to use controlled, targeted diagnostics. Our tests are also distinct in eliciting a very specific response profile in humans, creating unique predictive challenges for models, as described subsequently.",
"cite_spans": [
{
"start": 160,
"end": 185,
"text": "(Marvin and Linzen, 2018;",
"ref_id": "BIBREF26"
},
{
"start": 186,
"end": 211,
"text": "Jumelet and Hupkes, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "We further deviate from previous work analyzing LMs in that we not only compare word probabilities-we also examine word prediction accuracies directly, for a richer picture of models' specific strengths and weaknesses. Some previous work has used word prediction accuracy as a test of LMs' language understanding-the LAMBADA dataset (Paperno et al., 2016) , in particular, tests models' ability to predict the final word of a passage, in cases where the final sentence alone is insufficient for prediction. However, although LAMBADA presents a challenging prediction task, it is not well-suited to ask targeted questions about types of information used by LMs for prediction-unlike our tests, LAMBADA is not controlled to isolate and test the use of specific types of information in prediction. Our tests are thus unique in taking advantage of the additional information provided by testing word prediction accuracy, while also leveraging the benefits of controlled sentences that allow for asking targeted questions.",
"cite_spans": [
{
"start": 333,
"end": 355,
"text": "(Paperno et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Finally, our testing of BERT relates to a growing literature examining linguistic characteristics of the BERT model itself, to better understand what underlies the model's impressive performance. Clark et al. (2019) analyze the dynamics of BERT's selfattention mechanism, probing attention heads for syntactic sensitivity and finding that individual heads specialize strongly for syntactic and coreference relations. Lin et al. (2019) also examine syntactic awareness in BERT by syntactic probing at different layers, and by examination of syntactic sensitivity in the self-attention mechanism. Tenney et al. (2019a) test a variety of linguistic tasks at different layers of the BERT model. Most similarly to our work here, Goldberg (2019) tests BERT on several of the targeted syntactic evaluations described earlier for LMs, finding BERT to exhibit very strong performance on these measures. Our work complements these approaches in testing BERT's linguistic capacities directly via the word prediction mechanism, and in expanding beyond the syntactic tests used to examine BERT's predictions in Goldberg (2019) .",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 417,
"end": 434,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 1098,
"end": 1113,
"text": "Goldberg (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The power in our diagnostics stems from their origin in psycholinguistic studies-the items have been carefully designed for studying specific aspects of language processing, and each test has been shown to produce informative patterns of results when tested on humans. In this section we provide relevant background on human language processing, and explain how we use this information to choose the particular tests used here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging Human Studies",
"sec_num": "4"
},
{
"text": "To study language processing in humans, psycholinguists often test human responses to words in context, in order to better understand the information that our brains use to generate predictions. In particular, there are two types of predictive human responses that are relevant to us here:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Prediction in Humans",
"sec_num": "4.1"
},
{
"text": "Cloze Probability The first measure of human expectation is a measure of the ''cloze'' response. In a cloze task, humans are given an incomplete sentence and tasked with filling their expected word in the blank. ''Cloze probability'' of a word w in context c refers to the proportion of people who choose w to complete c. We will treat this as the best available gold standard for human prediction in context-humans completing the cloze task typically are not under any time pressure, so they have the opportunity to use all available information from the context to arrive at a prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Prediction in Humans",
"sec_num": "4.1"
},
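As a concrete illustration of the cloze measure, the following minimal sketch computes cloze probability as the proportion of participants producing each completion for a context. The example responses are invented for illustration, not data from any of the studies cited here.

```python
from collections import Counter

def cloze_probabilities(responses):
    """Cloze probability of each completion: the proportion of participants
    who produced that word for the given context."""
    counts = Counter(r.strip().lower() for r in responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Hypothetical participant responses for a context such as
# "He caught the pass and scored another ___."
responses = ["touchdown"] * 18 + ["goal"] * 2
print(cloze_probabilities(responses))  # {'touchdown': 0.9, 'goal': 0.1}
```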
{
"text": "The second measure of human expectation is a brain response known as the N400, which is detected by measuring electrical activity at the scalp (by electroencephalography). Like cloze, the N400 can be used to gauge how expected a word w is in a context c-the amplitude of the N400 response appears to be sensitive to fit of a word in context, and has been shown to correlate with cloze in many cases (Kutas and Hillyard, 1984) . The N400 has also been shown to be predicted by LM probabilities (Frank et al., 2013) . However, the N400 differs from cloze in being a real-time response that occurs only 400 milliseconds into the processing of a word. Accordingly, the expectations reflected in the N400 sometimes deviate from the more fully formed expectations reflected in the untimed cloze response.",
"cite_spans": [
{
"start": 399,
"end": 425,
"text": "(Kutas and Hillyard, 1984)",
"ref_id": "BIBREF21"
},
{
"start": 493,
"end": 513,
"text": "(Frank et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N400 Amplitude",
"sec_num": null
},
{
"text": "The test sets that we use here are all drawn from human studies that have revealed divergences between cloze and N400 profiles-that is, for each of these tests, the N400 response suggests a level of insensitivity to certain information when computing expectations, causing a deviation from the fully informed cloze predictions. We choose these as our diagnostics because they provide built-in sensitivity tests targeting the types of information that appear to have reduced effect on the N400-and because they should present particularly challenging prediction tasks, tripping up models that fail to use the full set of available information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Diagnostic Tests",
"sec_num": "4.2"
},
{
"text": "Each of our diagnostics supports three types of testing: word prediction accuracy, sensitivity testing, and qualitative prediction analysis. Because these items are designed to draw conclusions about human processing, each set is carefully constructed to constrain the information relevant for making word predictions. This allows us to examine how well LMs use this target information. For word prediction accuracy, we use the most expected items from human cloze probabilities as the gold completions. 2 These represent predictions that models should be able to make if they access and apply all relevant context information when generating probabilities for target words.",
"cite_spans": [
{
"start": 504,
"end": 505,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "For sensitivity testing, we compare model probabilities for good versus bad completionsspecifically, comparisons on which the N400 showed reduced sensitivity in experiments. This allows us to test whether LMs will show similar insensitivities on the relevant linguistic distinctions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "Finally, because these items are constructed in such a controlled manner, qualitative analysis of models' top predictions can be highly informative about information being applied for prediction. We leverage this in our experiments detailed in Sections 6-9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
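The two quantitative tests described above reduce to simple comparisons over the model's probability distribution for the target position. The sketch below is our illustration, not the released evaluation code: it assumes a generic predict_distribution(context) function returning a word-to-probability mapping, and its threshold argument corresponds to the .01 margin used later in Sections 7-9.

```python
def topk_accuracy(items, predict_distribution, k=5):
    """Word prediction accuracy: fraction of items whose expected (top cloze)
    completion appears among the model's top-k predictions."""
    hits = 0
    for context, expected, _bad in items:
        probs = predict_distribution(context)
        topk = sorted(probs, key=probs.get, reverse=True)[:k]
        hits += expected in topk
    return hits / len(items)

def sensitivity(items, predict_distribution, threshold=0.0):
    """Sensitivity: fraction of items where the good completion receives
    higher probability than the bad completion, optionally by a margin."""
    better = 0
    for context, good, bad in items:
        probs = predict_distribution(context)
        better += probs.get(good, 0.0) - probs.get(bad, 0.0) > threshold
    return better / len(items)
```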
{
"text": "In all tests, the target word to be predicted falls in the final position of the provided context, which means that these tests should function similarly for either left-to-right or bidirectional LMs. Similarly, because these tests require only that a model can produce token probabilities in context, they are equally applicable to the masked LM setting of BERT as to a standard LM. In anticipation of testing the BERT model, and to facilitate fair future comparisons with the present results, we filter out items for which the expected word is not in BERT's single-word vocabulary, to ensure that all expected words can be predicted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "It is important to acknowledge that these are small test sets, limited in size due to their origin in psycholinguistic studies. However, because these sets have been hand-designed by cognitive scientists to test predictive processing in humans, their value is in the targeted assessment that they provide with respect to information that LMs use in prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "We now we describe each test set in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "Our first set targets commonsense and pragmatic inference, and tests sensitivity to differences within semantic category. The left column of Table 1 shows examples of these items, each of which consists of two sentences. These items come from an influential human study by Federmeier and Kutas (1999) , which tested how brains would respond to different types of context completions, shown in the right columns of Table 1 .",
"cite_spans": [
{
"start": 288,
"end": 300,
"text": "Kutas (1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": null
},
{
"start": 414,
"end": 421,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CPRAG-102: Commonsense and Pragmatic Inference",
"sec_num": "5.1"
},
{
"text": "Information Needed for Prediction Accurate prediction on this set requires use of commonsense reasoning to infer what is being described in the first sentence, and pragmatic reasoning to determine how the second sentence relates. For instance, in Table 1 , commonsense knowledge informs us that red color left by kisses suggests lipstick, and pragmatic reasoning allows us to infer that the thing to stop wearing is related to the complaint. As in LAMBADA, the final sentence is generic, not supporting prediction on its own. Unlike LAMBADA, the consistent structure of these items allows us to target specific model capabilities; 3 additionally, none of these items contain the target word in context, 4 forcing models to use commonsense inference rather than coreference. Human cloze probabilities show a high level of agreement on appropriate completions for these items-average cloze probability for expected completions is .74.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CPRAG-102: Commonsense and Pragmatic Inference",
"sec_num": "5.1"
},
{
"text": "Sensitivity Test The Federmeier and Kutas (1999) study found that while the inappropriate completions (e.g., mascara, bracelet) had cloze probabilities of virtually zero (average cloze .004 and .001, respectively), the N400 showed some expectation for completions that shared a semantic category with the expected completion (e.g., mascara, by relation to lipstick). Our sensitivity test targets this distinction, testing whether LMs will favor inappropriate completions based on shared semantic category with expected completions.",
"cite_spans": [
{
"start": 36,
"end": 48,
"text": "Kutas (1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CPRAG-102: Commonsense and Pragmatic Inference",
"sec_num": "5.1"
},
{
"text": "Data The authors of the original study make available 40 of their contexts-we filter out six to accommodate BERT's single-word vocabulary, 5 for a final set of 34 contexts, 102 total items. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CPRAG-102: Commonsense and Pragmatic Inference",
"sec_num": "5.1"
},
{
"text": "Our second set targets event knowledge and semantic role interpretation, and tests sensitivity to impact of role reversals. Information Needed for Prediction Accurate prediction on this set requires a model to interpret semantic roles from sentence syntax, and apply event knowledge about typical interactions between types of entities in the given roles. The set has reversals for each noun pair (shown in Table 2 ) so models must distinguish roles for each order.",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "ROLE-88: Event Knowledge and Semantic Role Sensitivity",
"sec_num": "5.2"
},
{
"text": "The Chow et al. (2016) study found that although each completion (e.g., served)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity Test",
"sec_num": null
},
{
"text": "is good for only one of the noun orders and not the reverse, the N400 shows a similar level of expectation for the target completions regardless of noun order. Our sensitivity test targets this distinction, testing whether LMs will show similar difficulty distinguishing appropriate continuations based on word order and semantic role. Human cloze probabilities show strong sensitivity to the role reversal, with average cloze difference of .233 between good and bad contexts for a given completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity Test",
"sec_num": null
},
{
"text": "Data The authors provide 120 sentences (60 pairs)-which we filter to 88 final items, removing pairs for which the best completion of either context is not in BERT's single-word vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity Test",
"sec_num": null
},
{
"text": "Our third set targets understanding of the meaning of negation, along with knowledge of category membership. Table 3 shows examples of these test items, which involve absence or presence of negation in simple sentences, with two different completions that vary in truth depending on the negation. These test items come from a human study by Fischler et al. (1983) , which examined how human expectations change with the addition of negation.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "Fischler et al. (1983)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "NEG-136: Negation",
"sec_num": "5.3"
},
{
"text": "Information Needed for Prediction Because the negative contexts in these items are highly unconstraining (A robin is not a ?), prediction accuracy is not a useful measure for the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEG-136: Negation",
"sec_num": "5.3"
},
{
"text": "A robin is a bird tree A robin is not a bird tree Fischler et al. (1983) study found that although the N400 shows more expectation for true completions in affirmative sentences (e.g., A robin is a bird), it fails to adjust to negation, showing more expectation for false continuations in negative sentences (e.g., A robin is not a bird). Our sensitivity test targets this distinction, testing whether LMs will show similar insensitivity to impacts of negation. Note that here we use truth judgments rather than cloze probability as an indication of the quality of a completion.",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "Fischler et al. (1983)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Match Mismatch",
"sec_num": null
},
{
"text": "Data Fischler et al. provide the list of 18 subject nouns and 9 category nouns that they use for their sentences, which we use to generate a comparable dataset, for a total of 72 items. 7 We refer to these 72 simple sentences as NEG-136-SIMP. All target words are in BERT's single-word vocabulary.",
"cite_spans": [
{
"start": 186,
"end": 187,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Match Mismatch",
"sec_num": null
},
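The 72 items of NEG-136-SIMP follow from crossing the 18 subject nouns with affirmative/negative framing and matching/mismatching category completions (18 x 2 x 2 = 72). The sketch below illustrates this construction; the noun triples shown are our own examples rather than the full Fischler et al. lists, and the a/an rule is simplified.

```python
# (subject, matching category, mismatching category); illustrative pairs only,
# the full lists come from Fischler et al. (1983).
nouns = [
    ("robin", "bird", "tree"),
    ("hammer", "tool", "insect"),
]

def article(word):
    # Simplified a/an choice based on the first letter of the following word.
    return "an" if word[0].lower() in "aeiou" else "a"

items = []  # (context ending in [MASK], target completion, is_negated, is_match)
for subject, match, mismatch in nouns:
    for negated in (False, True):
        for target, is_match in ((match, True), (mismatch, False)):
            context = (f"{article(subject).capitalize()} {subject} is"
                       f"{' not' if negated else ''} {article(target)} [MASK] .")
            items.append((context, target, negated, is_match))

# With the full lists: 18 subject nouns x 2 (affirmative/negative)
# x 2 (match/mismatch) = 72 items.
```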
{
"text": "Supplementary Items In a subsequent study, Nieuwland and Kuperberg (2008) followed up on the Fischler et al. (1983) experiment, creating affirmative and negative sentences chosen to be more ''natural ... for somebody to say'', and contrasting these with affirmative and negative sentences chosen to be less natural. ''Natural'' items include examples like Most smokers find that quitting is (not) very (difficult/easy), while items designed to be less natural include examples like Vitamins and proteins are (not) very (good/bad). The authors share 16 base contexts, corresponding to 64 additional items, which we add to the original 72 for additional comparison. All target words are in BERT's single-word vocabulary. We refer to these supplementary 64 items, designed to test effects of naturalness, as NEG-136-NAT.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "Fischler et al. (1983)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Match Mismatch",
"sec_num": null
},
{
"text": "As a case study, we use these three diagnostics to examine the predictive capacities of the pretrained BERT model (Devlin et al., 2019) , which has been the basis of impressive performance across a wide range of tasks. BERT is a deep bidirectional transformer network (Vaswani et al., 2017) pre-trained on tasks of masked language modeling (predicting masked words given bidirectional context) and next-sentence prediction (binary classification of whether two sentences are a sequence). We test two versions of the pretrained model: BERT BASE and BERT LARGE (uncased). These versions have the same basic architecture, but BERT LARGE has more parameters-in total, BERT BASE has 110M parameters, and BERT LARGE has 340M. We use the PyTorch BERT implementation with masked language modeling parameters for generating word predictions. 8 For testing, we process our sentence contexts to have a [MASK] token-also used during BERT's pre-training-in the target position of interest. We then measure BERT's predictions for this [MASK] token's position. Following Goldberg (2019), we also add a [CLS] token to the start of each sentence to mimic BERT's training conditions.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 268,
"end": 290,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 833,
"end": 834,
"text": "8",
"ref_id": null
},
{
"start": 1021,
"end": 1027,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "BERT differs from traditional left-to-right language models, and from real-time human predictions, in being a bidirectional model able to use information from both left and right context. This difference should be neutralized by the fact that our items provide all information in the left context-however, in our experiments here, we do allow one advantage for BERT's bidirectionality:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We include a period and a [SEP] token after each [MASK] token, to indicate that the target position is followed by the end of the sentence. We do this in order to give BERT the best possible chance of success, by maximizing the chance of predicting a single word rather than the start of a phrase. Items for these experiments thus appear as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "[CLS] The restaurant owner forgot which customer the waitress had [MASK] .",
"cite_spans": [
{
"start": 66,
"end": 72,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "[SEP]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Trunc Shuf + Trunc BERT BASE k = 1 23.5 14.1 \u00b1 3.1 14.7 8.1 \u00b1 3.4 BERT LARGE k = 1 35.3 17.4 \u00b1 3.5 17.6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shuf",
"sec_num": null
},
{
"text": "10.0 \u00b1 3.0 BERT BASE k = 5 52.9 36.1 \u00b1 2.8 35.3 22.1 \u00b1 3.2 BERT LARGE k = 5 52.9 39.2 \u00b1 3.9 32.4 21.3 \u00b1 3.7 Table 4 : CPRAG-102 word prediction accuracies (with and without sentence perturbations). Shuf = first sentence shuffled; Trunc = second sentence truncated to two words before target.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shuf",
"sec_num": null
},
{
"text": "Logits produced by the language model for the target position are softmax-transformed to obtain probabilities comparable to human cloze probability values for those target positions. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shuf",
"sec_num": null
},
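The paper used the PyTorch BERT implementation available at the time; the sketch below reproduces the described procedure (masked target, [CLS]/[SEP] added, softmax over the vocabulary at the [MASK] position) with the current HuggingFace transformers API. It is our reconstruction for illustration, not the authors' released code.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

# The tokenizer adds [CLS] and [SEP] automatically, matching the item format
# described above; the target position is marked with [MASK].
text = "the restaurant owner forgot which customer the waitress had [MASK] ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, seq_len, vocab_size]

# Locate the [MASK] position and softmax its logits over the vocabulary.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, mask_pos].softmax(dim=-1)

top = torch.topk(probs, k=5)
print(tokenizer.convert_ids_to_tokens(top.indices.tolist()), top.values.tolist())
```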
{
"text": "First we report BERT's results on the CPRAG-102 test targeting common sense, pragmatic reasoning, and sensitivity within semantic category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for CPRAG-102",
"sec_num": "7"
},
{
"text": "We define accuracy as percentage of items for which the ''expected'' completion is among the model's top k predictions, with k = 1 and k = 5. Table 4 (''Orig'') shows accuracies of BERT BASE and BERT LARGE . For accuracy at k = 1, BERT LARGE soundly outperforms BERT BASE with correct predictions on just over a third of contexts. Expanding to k = 5, the models converge on the same accuracy, identifying the expected completion for about half of contexts. 10 Because commonsense and pragmatic reasoning are non-trivial concepts to pin down, it is worth asking to what extent BERT can achieve this performance based on simpler cues like word identities or n-gram context. To test importance of word order, we shuffle the words in each item's first sentence, garbling the message but leaving all individual words intact (''Shuf'' in Table 4 ). To test adequacy of n-gram context, we truncate the second sentence, removing all but the two words preceding the target word (''Trunc'')-leaving Prefer good w/ .01 thresh BERT BASE 73.5 44.1 BERT LARGE 79.4 58.8 Table 5 : Percent of CPRAG-102 items with good completion assigned higher probability than bad.",
"cite_spans": [
{
"start": 457,
"end": 459,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 4",
"ref_id": null
},
{
"start": 832,
"end": 839,
"text": "Table 4",
"ref_id": null
},
{
"start": 1056,
"end": 1063,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "7.1"
},
{
"text": "generally enough syntactic context to identify the part of speech, as well as some sense of semantic category (on top of the thematic setup of the first sentence), but removing other information from that second sentence. We also test with both perturbations together (''Shuf + Trunc''). Because different shuffled word orders give rise to different results, for the ''Shuf'' and ''Shuf + Trunc'' settings we show mean and standard deviation from 100 runs. Table 4 shows the accuracies as a result of these perturbations. One thing that is immediately clear is that the BERT model is indeed making use of information provided by the word order of the first sentence, and by the more distant content of the second sentence, as each of these individual perturbations causes a notable drop in accuracy. It is worth noting, however, that with each perturbation there is a subset of items for which BERT's accuracy remains intact. Unsurprisingly, many of these items are those containing particularly distinctive words associated with the target, such as checkmate (chess), touchdown (football), and stone-washed (jeans). This suggests that some of BERT's success on these items may be attributable to simpler lexical or n-gram information. In Section 7.3 we take a closer look at some more difficult items that seemingly avoid such loopholes.",
"cite_spans": [],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "7.1"
},
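The ''Shuf'' and ''Trunc'' perturbations are simple string manipulations. The sketch below is our illustration of the described procedure; the example item paraphrases the lipstick example discussed earlier rather than quoting the exact stimulus.

```python
import random

def shuffle_first_sentence(first_sentence, seed=None):
    """'Shuf': randomly reorder the words of the first sentence, keeping
    individual words intact but garbling the message."""
    words = first_sentence.rstrip(".").split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words) + "."

def truncate_second_sentence(second_sentence):
    """'Trunc': keep only the two words immediately preceding the target
    position (marked here as [MASK]) in the second sentence."""
    words = second_sentence.split()
    mask_idx = words.index("[MASK]")
    return " ".join(words[max(0, mask_idx - 2):])

first = "He complained that after she kissed him he could not get the red color off his face."
second = "He finally just asked her to stop wearing that [MASK] ."
print(shuffle_first_sentence(first, seed=0))
print(truncate_second_sentence(second))  # "wearing that [MASK] ."
```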
{
"text": "Next we test BERT's ability to prefer expected completions over inappropriate completions of the same semantic category. We first test this by simply measuring the percentage of items for which BERT assigns a higher probability to the good completion (e.g., lipstick from Table 1) than to either of the inappropriate completions (e.g., mascara, bracelet). Table 5 shows the results. We see that BERT BASE assigns the highest probability to the expected completion in 73.5% of items, whereas BERT LARGE does so for 79.4%-a solid majority, but with a clear portion of items for which an inappropriate, semantically related target does receive a higher probability than the appropriate word.",
"cite_spans": [],
"ref_spans": [
{
"start": 356,
"end": 363,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "7.2"
},
{
"text": "We can make our criterion slightly more stringent if we introduce a threshold on the probability difference. The average cloze difference between good and bad completions is about .74 for the data from which these items originate, reflecting a very strong human sensitivity to the difference in completion quality. To test the proportion of items in which BERT assigns more substantially different probabilities, we filter to items for which the good completion probability is higher by greater than .01-a threshold chosen to be very generous given the significant average cloze difference. With this threshold, the sensitivity drops noticeably-BERT BASE shows sensitivity in only 44.1% of items, and BERT LARGE shows sensitivity in only 58.8%. These results tell us that although the models are able to prefer good completions to same-category bad completions in a majority of these items, the difference is in many cases very small, suggesting that this sensitivity falls short of what we see in human cloze responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "7.2"
},
{
"text": "We thus see that the BERT models are able to identify the correct word completions in approx- 14.8 Table 7 : ROLE-88 word prediction accuracies (with and without sentence perturbations). -Obj = generic object; -Subj = generic subject; -Both = generic object and subject.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Examination of Predictions",
"sec_num": "7.3"
},
{
"text": "imately half of CPRAG-102 items, and that the models are able to prefer good completions to semantically related inappropriate completions in a majority of items, though with notably weaker sensitivity than humans. To better understand the models' weaknesses, in this section we examine predictions made when the models fail. Table 6 shows three example items along with the top five predictions of BERT LARGE . In each case, BERT provides completions that are sensible in the context of the second sentence, but that fail to take into account the context provided by the first sentence-in particular, the predictions show no evidence of having been able to infer the relevant information about the situation or object described in the first sentence. For instance, we see in the first example that BERT has correctly zeroed in on things that one might borrow, but it fails to infer that the thing to be borrowed is something to be used for cutting lumber. Similarly, BERT's failure to detect the snow-shoveling theme of the second item makes for an amusing set of non sequitur completions. Finally, the third example shows that BERT has identified an animal theme (unsurprising, given the words zoo and animal), but it is not applying 24.0 26.1 21.7 41.1 BERT LARGE k=5 28.0 34.8 39.1 52.9 Table 8 : Accuracy of predictions in unperturbed ROLE-88 sentences, binned by max cloze of context. the phrase black and white stripes to identify the appropriate completion of zebra. Altogether, these examples illustrate that with respect to the target capacities of commonsense inference and pragmatic reasoning, BERT fails in these more challenging cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 333,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 1291,
"end": 1298,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Examination of Predictions",
"sec_num": "7.3"
},
{
"text": "Next we turn to the ROLE-88 test of semantic role sensitivity and event knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for ROLE-88",
"sec_num": "8"
},
{
"text": "We again define accuracy by presence of a top cloze item within the model's top k predictions. Table 7 (''Orig'') shows the accuracies for BERT LARGE and BERT BASE . For k = 1, accuracies are very low, with BERT BASE slightly outperforming BERT LARGE . When we expand to k = 5, accuracies predictably increase, and BERT LARGE now outperforms BERT BASE by a healthy margin.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "8.1"
},
{
"text": "To test the extent to which BERT relies on the individual nouns in the context, we try two different perturbations of the contexts: removing the information from the object (which customer the waitress ...), and removing the information from the subject (which customer the waitress...), in each case by replacing the noun with a generic substitute. We choose one and other as substitutions for the object and subject, respectively. Table 7 shows the results with each of these perturbations individually and together. We observe several notable patterns. First, removing either the object (''-Obj'') or the subject (''-Sub'') has relatively little effect on the accuracy of BERT BASE for either k = 1 or k = 5. This is quite different from what we see with BERT LARGE , the accuracy of which drops substantially when the object or subject information is removed. These Prefer good w/ .01 thresh BERT BASE 75.0 31.8 BERT LARGE 86.4 43.2 Table 9 : Percent of ROLE-88 items with good completion assigned higher probability than role reversal.",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 7",
"ref_id": null
},
{
"start": 937,
"end": 944,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "8.1"
},
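A sketch of the -Obj and -Subj perturbations described above, using the substitutes one and other named in the text; the helper functions are our illustration, not the authors' code.

```python
def remove_object(context, object_noun):
    """'-Obj': replace the object noun with the generic substitute 'one'."""
    return context.replace(object_noun, "one", 1)

def remove_subject(context, subject_noun):
    """'-Subj': replace the subject noun with the generic substitute 'other'."""
    return context.replace(subject_noun, "other", 1)

context = "the restaurant owner forgot which customer the waitress had [MASK] ."
print(remove_object(context, "customer"))   # ... which one the waitress had ...
print(remove_subject(context, "waitress"))  # ... which customer the other had ...
print(remove_subject(remove_object(context, "customer"), "waitress"))  # '-Both'
```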
{
"text": "patterns suggest that BERT BASE is less dependent upon the full detail of the subject-object structure, instead relying primarily upon one or the other of the participating nouns for its verb predictions. BERT LARGE , on the other hand, appears to make heavier use of both nouns, such that loss of either one causes non-trivial disruption in the predictive accuracy. It should be noted that the items in this set are overall less constraining than those in Section 7-humans converge less clearly on the same predictions, resulting in lower average cloze values for the best completions. To investigate the effect of constraint level, we divide items into four bins by top cloze value per sentence. Table 8 shows the results. With the exception of BERT BASE at k = 1, for which accuracy in all bins is fairly low, it is clear that the highest cloze bin yields much higher model accuracies than the other three bins, suggesting some alignment between how constraining contexts are for humans and how constraining they are for BERT. However, even in the highest cloze bin, when at least a third of humans converge on the same completion, even BERT LARGE at k = 5 is only correct in half of cases, suggesting substantial room for improvement. 11",
"cite_spans": [],
"ref_spans": [
{
"start": 698,
"end": 705,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "8.1"
},
{
"text": "Next we test BERT's sensitivity to role reversals by comparing model probabilities for a given completion (e.g., served) in the appropriate versus role-reversed contexts. We again start by testing the percentage of items for which BERT assigns a higher probability to the appropriate than to the inappropriate completion. As we see in Table 9 , BERT BASE prefers the good continuation in 75% of items, whereas BERT LARGE does so for 86.4%-comparable to the proportions for CPRAG-102. However, when we apply our Context BERT BASE predictions BERT LARGE predictions the camper reported which girl the bear had taken, killed, attacked, bitten, picked attacked, killed, eaten, taken, targeted the camper reported which bear the girl had taken , killed, fallen, bitten, jumped taken, left, entered, found, chosen the restaurant owner forgot which customer the waitress had served, hired, brought, been, taken served, been, delivered, mentioned, brought the restaurant owner forgot which waitress the customer had served, been, chosen, ordered, hired served, chosen, called, ordered, been 38.9 BERT LARGE k = 1 44.4 BERT BASE k = 5 100 BERT LARGE k = 5 100 Table 11 : Accuracy of word predictions in NEG-136-SIMP affirmative sentences.",
"cite_spans": [
{
"start": 608,
"end": 688,
"text": "taken, killed, attacked, bitten, picked attacked, killed, eaten, taken, targeted",
"ref_id": null
},
{
"start": 742,
"end": 810,
"text": ", killed, fallen, bitten, jumped taken, left, entered, found, chosen",
"ref_id": null
},
{
"start": 879,
"end": 950,
"text": "hired, brought, been, taken served, been, delivered, mentioned, brought",
"ref_id": null
},
{
"start": 1011,
"end": 1085,
"text": "served, been, chosen, ordered, hired served, chosen, called, ordered, been",
"ref_id": null
}
],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 9",
"ref_id": null
},
{
"start": 689,
"end": 741,
"text": "the camper reported which bear the girl had taken",
"ref_id": null
},
{
"start": 1154,
"end": 1162,
"text": "Table 11",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "8.2"
},
{
"text": "threshold of .01 (still generous given the average cloze difference of .233), sensitivity drops more dramatically than on CPRAG-102, to 31.8% and 43.2%, respectively. Overall, these results suggest that BERT is, in a majority of cases of this kind, able to use noun position to prefer good verb completions to bad-however, it is again less sensitive than humans to these distinctions, and it fails to match human word predictions on a solid majority of cases. The model's ability to choose good completions over role reversals (albeit with weak sensitivity) suggests that the failures on word prediction accuracy are not due to inability to distinguish word orders, but rather to a weakness in event knowledge or understanding of semantic role implications. Table 10 shows predictions of BERT BASE and BERT LARGE for some illustrative examples. For the girl/bear items, we see that BERT BASE favors continuations like killed and bitten with bear as subject, but also includes these continuations with girl as subject. BERT LARGE , by contrast, excludes these continuations when girl is the subject.",
"cite_spans": [],
"ref_spans": [
{
"start": 758,
"end": 766,
"text": "Table 10",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "8.2"
},
{
"text": "In the second pair of sentences we see that the models choose served as the top continuation Affirmative Negative BERT BASE 100 0.0 BERT LARGE 100 0.0 Table 12 : Percent of NEG-136-SIMP items with true completion assigned higher probability than false.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Qualitative Examination of Predictions",
"sec_num": "8.3"
},
{
"text": "under both word orders, even though for the second word order this produces an unlikely scenario. In both cases, the model's assigned probability for served is much higher for the appropriate word order than the inappropriate one-a difference of .6 for BERT LARGE and .37 for BERT BASE -but it is noteworthy that no more semantically appropriate top continuation is identified by either model for which waitress the customer had . As a final note, although the continuations are generally impressively grammatical, we see exceptions in the second bear/girl sentenceboth models produce completions of questionable grammaticality (or at least questionable use of selection restrictions), with sentences like which bear the girl had fallen from BERT BASE , and which bear the girl had entered from BERT LARGE .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Examination of Predictions",
"sec_num": "8.3"
},
{
"text": "Finally, we turn to the NEG-136 test of negation and category membership.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for NEG-136",
"sec_num": "9"
},
{
"text": "We start by testing the ability of BERT to predict correct category continuations for the affirmative contexts in NEG-136-SIMP. Table 14 : Percent of NEG-136-NAT with true continuation given higher probability than false. Aff = affirmative; Neg = negative; NT = natural; LN = less natural.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 136,
"text": "Table 14",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "9.1"
},
{
"text": "We see that for k = 5, the correct category is predicted for 100% of affirmative items, suggesting an impressive ability of both BERT models to associate nouns with their correct immediate hypernyms. We also see that the accuracy drops substantially when assessed on k = 1. Examination of predictions reveals that these errors consist exclusively of cases in which BERT completes the sentence with a repetition of the subject noun, e.g., A daisy is a daisy-which is certainly true, but which is not a likely or informative sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Prediction Accuracies",
"sec_num": "9.1"
},
{
"text": "We next assess BERT's sensitivity to the meaning of negation, by measuring the proportion of items in which the model assigns higher probabilities to true completions than to false ones. Table 12 shows the results, and the pattern is stark. When the statement is affirmative (A robin is a ), the models assign higher probability to the true completion in 100% of items. Even with the threshold of .01-which eliminated many comparisons on CPRAG-102 and ROLE-88-all items pass but one (for BERT BASE ), suggesting a robust preference for the true completions.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "9.2"
},
{
"text": "However, in the negative statements (A robin is not a ), BERT prefers the true completion in 0% of items, assigning the higher probability to the false completion in every case. This shows a strong insensitivity to the meaning of negation, with BERT preferring the category match completion every time, despite its falsity. Table 13 shows examples of the predictions made by BERT LARGE in positive and negative contexts. We see a clear illustration of the phenomenon suggested by the earlier results: For affirmative sentences, BERT produces generally true completions (at least in the top two)-but these completions remain largely unchanged after negation is added, resulting in many blatantly untrue completions.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Table 13",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Completion Sensitivity",
"sec_num": "9.2"
},
{
"text": "Another interesting phenomenon that we can observe in Table 13 is BERT's sensitivity to the nature of the determiner (a or an) preceding the masked word. This determiner varies depending on whether the upcoming target begins with a vowel or a consonant (for instance, our mismatched category paired with hammer is insect) and so the model can potentially use this cue to filter the predictions to those starting with either vowels or consonants. How effectively does BERT use this cue? The predictions indicate that BERT is for the most part extremely good at using this cue to limit to words that begin with the right type of letter. There are certain exceptions (e.g., An ant is not a ant), but these are in the minority.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Table 13",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Qualitative Examination of Predictions",
"sec_num": "9.3"
},
{
"text": "The supplementary NEG-136-NAT items allow us to examine further the model's handling of negation, with items designed to test the effect of ''naturalness''. When we present BERT with this new set of sentences, the model does show an apparent change in sensitivity to the negation. BERT BASE assigns true statements higher",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Increasing Naturalness",
"sec_num": "9.4"
},
{
"text": "Most smokers find that quitting is very difficult, easy, effective, dangerous, hard Most smokers find that quitting isn't very effective, easy, attractive, difficult, succcessful A fast food dinner on a first date is very good, nice, common, romantic, attractive A fast food dinner on a first date isn't very nice, good, romantic, appealing, exciting Table 15 : BERT LARGE top word predictions for selected NEG-136-NAT sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Table 15",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context BERT LARGE predictions",
"sec_num": null
},
{
"text": "probability than false for 75% of natural sentences (''NT''), and BERT LARGE does so for 87.5% of natural sentences. By contrast, the models each show preference for true statements in only 37.5% of items designed to be less natural (''LN''). Table 14 shows these sensitivities broken down by affirmative and negative conditions. Here we see that in the natural sentences, BERT prefers true statements for both affirmative and negative contexts-by contrast, the less natural sentences show the pattern exhibited on NEG-136-SIMP, in which BERT prefers true statements in a high proportion of affirmative sentences, and in 0% of negative sentences, suggesting that once again BERT is defaulting to category matches with the subject. Table 15 contains BERT LARGE predictions on two pairs of sentences from the ''Natural'' sentence set. It is worth noting that even when BERT's first prediction is appropriate in the context, the top candidates often contradict each other (e.g., difficult and easy). We also see that even with these natural items, sometimes the negation is not enough to reverse the top completions, as with the second pair of sentences, in which the fast food dinner both is and isn't a romantic first date.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Table 14",
"ref_id": null
},
{
"start": 731,
"end": 739,
"text": "Table 15",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context BERT LARGE predictions",
"sec_num": null
},
{
"text": "Our three diagnostics allow for a clarified picture of the types of information used for predictions by pre-trained BERT models. On CPRAG-102, we see that both models can predict the best completion approximately half the time (at k = 5), and that both models rely non-trivially on word order and full sentence context. However, successful predictions in the face of perturbations also suggest that some of BERT's success on these items may exploit loopholes, and when we examine predictions on challenging items, we see clear weaknesses in the commonsense and pragmatic inferences targeted by this set. Sensitivity tests show that BERT can also prefer good completions to bad semantically related completions in a majority of items, but many of these probability differences are very small, suggesting that the model's sensitivity is much less than that of humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
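{
"text": "These word prediction accuracies (e.g., at k = 5) correspond to querying BERT's masked-LM distribution at the completion position and checking whether the expected word appears among the top k candidates. The code sketch below is a minimal illustration of such a top-k query, again assuming the pytorch-pretrained-BERT interface; the model variant and the appended '.' and '[SEP]' formatting are illustrative assumptions, and the example context is the second item of Table 1.",
"code_sketch": [
"import torch",
"from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM",
"",
"tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')",
"model = BertForMaskedLM.from_pretrained('bert-large-uncased')",
"model.eval()",
"",
"# Example item from Table 1 (CPRAG-102); the completion position is replaced by [MASK].",
"context = ('He caught the pass and scored another touchdown. '",
"           'There was nothing he enjoyed more than a good game of')",
"tokens = ['[CLS]'] + tokenizer.tokenize(context) + ['[MASK]', '.', '[SEP]']",
"mask_index = tokens.index('[MASK]')",
"input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])",
"",
"with torch.no_grad():",
"    logits = model(input_ids)  # [1, seq_len, vocab_size]",
"",
"_, indices = torch.topk(logits[0, mask_index], 5)",
"print(tokenizer.convert_ids_to_tokens(indices.tolist()))  # counted correct at k = 5 if 'football' is listed"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},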
{
"text": "On ROLE-88, BERT's accuracy in matching top human predictions is much lower, with BERT LARGE at only 37.5% accuracy. Perturbations reveal interesting model differences, suggesting that BERT LARGE has more sensitivity than BERT BASE to the interaction between subject and object nouns. Sensitivity tests show that both models are typically able to use noun position to prefer good completions to role reversals, but the differences are on average even smaller than on CPRAG-102, indicating again that model sensitivity to the distinctions is less than that of humans. The models' general ability to distinguish role reversals suggests that the low word prediction accuracies are not due to insensitivity to word order per se, but rather to weaknesses in event knowledge or understanding of semantic role implications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "Finally, NEG-136 allows us to zero in with particular clarity on a divergence between BERT's predictive behavior and what we might expect from a model using all available information about word meaning and truth/falsity. When presented with simple sentences describing category membership, BERT shows a complete inability to prefer true over false completions for negative sentences. The model shows an impressive ability to associate subject nouns with their hypernyms, but when negation reverses the truth of those hypernyms, BERT continues to predict them nonetheless. By contrast, when presented with sentences that are more ''natural'', BERT does reliably prefer true completions to false, with or without negation. Although these latter sentences are designed to differ in naturalness, in all likelihood it is not naturalness per se that drives the model's relative success on them-but rather a higher frequency of these types of statements in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "The latter result in particular serves to highlight a stark, but ultimately unsurprising, observation about what these pre-trained language models bring to the table. Whereas the function of language processing for humans is to compute meaning and make judgments of truth, language models are trained as predictive models-they will simply leverage the most reliable cues in order to optimize their predictive capacity. For a phenomenon like negation, which is often not conducive to clear predictions, such models may not be equipped to learn the implications of this word's meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "In this paper we have introduced a suite of diagnostic tests for language models to better our understanding of the linguistic competencies acquired by pre-training via language modeling. We draw our tests from psycholinguistic studies, allowing us to target a range of linguistic capacities by testing word prediction accuracies and sensitivity of model probabilities to linguistic distinctions. As a case study, we apply these tests to analyze strengths and weaknesses of the popular BERT model, finding that it shows sensitivity to role reversal and same-category distinctions, albeit less than humans, and it succeeds with noun hypernyms, but it struggles with challenging inferences and role-based event prediction-and it shows clear failures with the meaning of negation. We make all test sets and experiment code available (see Footnote 1), for further experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "The capacities targeted by these test sets are by no means comprehensive, and future work can build on the foundation of these datasets to expand to other aspects of language processing. Because these sets are small, we must also be conservative in the strength of our conclusions-different formulations may yield different performance, and future work can expand to verify the generality of these results. In parallel, we hope that the weaknesses highlighted by these diagnostics can help to identify areas of need for establishing robust and generalizable models for language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "With one exception, NEG-136, for which we use completion truth, as in the original study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To highlight this advantage, as a supplement for this test set we provide specific annotations of each item, indicating the knowledge/reasoning required to make the prediction.4 More than 80% of LAMBADA items contain the target word in the preceding context.5 For a couple of items, we also replace an inappropriate completion with another inappropriate completion of the same semantic category to accommodate BERT's vocabulary.6 Our ''item'' counts use all context/completion pairings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The one modification that we make to the original subject noun list is a substitution of the word salmon for bass within the category of fish-because bass created lexical ambiguity that was not interesting for our purposes here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/pytorchpretrained-BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Human cloze probabilities are importantly different from true probabilities over a vocabulary, making these values not directly comparable. However, cloze provides important indication-the best indication we have-of how much a context constrains human expectations toward a continuation, so we do at times loosely compare these two types of values.10 Note that word accuracies are computed by context, so these accuracies are out of the 34 base contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This analysis is made possible by the Chow et al.(2016)authors' generous provision of the cloze data for these items, not originally made public with the items themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Tal Linzen, Kevin Gimpel, Yoav Goldberg, Marco Baroni, and several anon-ymous reviewers for valuable feedback on earlier versions of this paper. We also thank members of the Toyota Technological Institute at Chicago for useful discussion of these and related issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Finegrained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine- grained analysis of sentence embeddings using auxiliary prediction tasks. International Con- ference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SemEval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385-393.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SICK through the SemEval glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Resources and Evaluation",
"volume": "50",
"issue": "1",
"pages": "95--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2016. SICK through the SemEval glasses. Lesson learned from the eval- uation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Language Resources and Evaluation, 50(1):95-124.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A 'bag-of-arguments' mechanism for initial verb predictions. Language",
"authors": [
{
"first": "Cybelle",
"middle": [],
"last": "Wing-Yee Chow",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2016,
"venue": "Cognition and Neuroscience",
"volume": "31",
"issue": "5",
"pages": "577--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wing-Yee Chow, Cybelle Smith, Ellen Lau, and Colin Phillips. 2016. A 'bag-of-arguments' mechanism for initial verb predictions. Language, Cognition and Neuroscience, 31(5):577-596.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "RNN simulations of grammaticality judgments on long-distance dependencies",
"authors": [
{
"first": "Absar",
"middle": [],
"last": "Shammur",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammat- icality judgments on long-distance dependen- cies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 133-144.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What does BERT look at? An analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.04341"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's atten- tion. arXiv preprint arXiv:1906.04341.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What you can cram into a single vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Loic",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL 2018-56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Loic Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In ACL 2018-56th Annual Meeting of the Association for Computational Linguistics, pages 2126-2136.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The PASCAL recognising textual entailment challenge",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning Challenges Workshop",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising tex- tual entailment challenge. In Machine Learning Challenges Workshop, pages 177-190. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluating compositionality in sentence embeddings",
"authors": [
{
"first": "Ishita",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Demi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stuhlm\u00fcller",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"J"
],
"last": "Gershman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel J. Gershman, and Noah D. Goodman. 2018. Evaluating compositionality in sentence embeddings. Proceedings of the 40th Annual Meeting of the Cognitive Science Society.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pretraining of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre- training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Assessing composition in sentence vector representations",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1790--1801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790-1801.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A rose by any other name: Long-term memory structure and sentence processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kara",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Federmeier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kutas",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of memory and Language",
"volume": "41",
"issue": "4",
"pages": "469--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kara D. Federmeier and Marta Kutas. 1999. A rose by any other name: Long-term memory structure and sentence processing. Journal of memory and Language, 41(4):469-495.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Brain potentials related to stages of sentence verification",
"authors": [
{
"first": "Ira",
"middle": [],
"last": "Fischler",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"A"
],
"last": "Bloom",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"G"
],
"last": "Childers",
"suffix": ""
},
{
"first": "Salim",
"middle": [
"E"
],
"last": "Roucos",
"suffix": ""
},
{
"first": "Nathan",
"middle": [
"W"
],
"last": "Perry",
"suffix": ""
}
],
"year": 1983,
"venue": "Psychophysiology",
"volume": "20",
"issue": "4",
"pages": "400--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ira Fischler, Paul A. Bloom, Donald G. Childers, Salim E. Roucos, and Nathan W. Perry Jr. 1983. Brain potentials related to stages of sentence verification. Psychophysiology, 20(4):400-409.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word surprisal predicts N400 amplitude during reading",
"authors": [
{
"first": "Stefan",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Leun",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Otten",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Galli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL 2013-51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "2",
"issue": "",
"pages": "878--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan L. Frank, Leun J. Otten, Giulia Galli, and Gabriella Vigliocco. 2013. Word surprisal predicts N400 amplitude during reading. In ACL 2013-51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, volume 2, pages 878-883.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural language models as psycholinguistic subjects: Representations of syntactic state",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "32--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psy- cholinguistic subjects: Representations of syn- tactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Assessing BERT's syntactic abilities",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05287"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntac- tic abilities. arXiv preprint arXiv:1901.05287.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1195--1205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1195-1205.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Do language models understand anything?",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "222--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probing what different NLP tasks teach machines about function word comprehension",
"authors": [
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)",
"volume": "",
"issue": "",
"pages": "235--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235-249.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Brain potentials during reading reflect word expectancy and semantic association",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Kutas",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"A"
],
"last": "Hillyard",
"suffix": ""
}
],
"year": 1984,
"venue": "Nature",
"volume": "307",
"issue": "5947",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Kutas and Steven A. Hillyard. 1984. Brain potentials during reading reflect word expectancy and semantic association. Nature, 307(5947):161.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Th\u00e9o Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Lakretz",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yair Lakretz, Germ\u00e1n Kruszewski, Th\u00e9o Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The emergence of number and syntax units in LSTM language models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni. 2019. The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11-20.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Open sesame: Getting inside BERT's linguistic knowledge",
"authors": [
{
"first": "Yongjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01698"
]
},
"num": null,
"urls": [],
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. arXiv preprint arXiv:1906.01698.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Trans- actions of the Association for Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Targeted syntactic evaluation of language models",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1192--1202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empir- ical Methods in Natural Language Process- ing, pages 1192-1202.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3428--3448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural lan- guage inference. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3428-3448.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mante",
"suffix": ""
},
{
"first": "Gina",
"middle": [
"R"
],
"last": "Nieuwland",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kuperberg",
"suffix": ""
}
],
"year": 2008,
"venue": "Psychological Science",
"volume": "19",
"issue": "12",
"pages": "1213--1218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mante S. Nieuwland and Gina R. Kuperberg. 2008. When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12):1213-1218.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The LAMBADA dataset: Word prediction requiring a broad discourse context",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Quan"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fernandez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1525--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Paperno, Germ\u00e1n Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceed- ings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1525-1534.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contex- tualized word representations. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), volume 1, pages 2227-2237.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissect- ing contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499-1509.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Collecting diverse natural language inference problems for sentence representation evaluation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"Steven"
],
"last": "White",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "67--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Col- lecting diverse natural language inference prob- lems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empir- ical Methods in Natural Language Processing, pages 67-81.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.05950"
]
},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. arXiv preprint arXiv:1905.05950.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "What do you learn from context? Probing for sentence structure in contextualized word representations",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sen- tence structure in contextualized word repre- sentations. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language under- standing. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, page 353.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "What do RNN language models learn about filler-gap dependencies?",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "211--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Context</td><td>Expected Inappropriate</td></tr></table>",
"html": null,
"num": null,
"text": "He complained that after she kissed him, he couldn't get the red color off his face. He finally just asked her to stop wearing that lipstick mascara | bracelet He caught the pass and scored another touchdown. There was nothing he enjoyed more than a good game of football baseball | monopoly Table 1: Example items from CPRAG-102 dataset.",
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td>Context</td><td>Compl.</td></tr><tr><td>the restaurant owner forgot which</td><td>served</td></tr><tr><td>customer the waitress had</td><td/></tr><tr><td>the restaurant owner forgot which</td><td>served</td></tr><tr><td>waitress the customer had</td><td/></tr></table>",
"html": null,
"num": null,
"text": "shows an example item pair from this set. These items come from a human experiment byChow et al. (2016), which tested the brain's sensitivity to role reversals.",
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Example items from ROLE-88 dataset. Compl = Context Completion.",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Example items from NEG-136-SIMP dataset.negative contexts. We test prediction accuracy for affirmative contexts only, which allows us to test models' use of hypernym information (robin = bird). Targeting of negation happens in the sensitivity test.",
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>Context</td><td>BERT LARGE predictions</td></tr><tr><td/><td>note, letter, gun, blanket, newspaper</td></tr><tr><td>handed him a</td><td/></tr><tr><td>At the zoo, my sister asked if they painted the black and</td><td>cat, person, human, bird, species</td></tr><tr><td>white stripes on the animal. I explained to her that they</td><td/></tr><tr><td>were natural features of a</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Pablo wanted to cut the lumber he had bought to make some shelves. He asked his neighbor if he could borrow hercar, house, room, truck, apartment The snow had piled up on the drive so high that they couldn't get the car out. When Albert woke up, his father",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"text": "BERT LARGE top word predictions for selected CPRAG-102 items.",
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td>Accuracy</td></tr><tr><td>BERT BASE k = 1</td></tr></table>",
"html": null,
"num": null,
"text": "BERT BASE and BERT LARGE top word predictions for selected ROLE-88 sentences.",
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>shows the</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF10": {
"content": "<table><tr><td/><td colspan=\"4\">Aff. NT Neg. NT Aff. LN Neg. LN</td></tr><tr><td>BERT BASE</td><td>62.5</td><td>87.5</td><td>75.0</td><td>0.0</td></tr><tr><td colspan=\"2\">BERT LARGE 75.0</td><td>100</td><td>75.0</td><td>0.0</td></tr></table>",
"html": null,
"num": null,
"text": "BERT LARGE top word predictions for selected NEG-136-SIMP sentences.",
"type_str": "table"
}
}
}
}