{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:26.352635Z"
},
"title": "Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization",
"authors": [
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training. We address this issue by deploying a novel word-learning paradigm to test BERT's (Devlin et al., 2018) few-shot learning capabilities for two aspects of English verbs: alternations and classes of selectional preferences. For the former, we fine-tune BERT on a single frame in a verbal-alternation pair and ask whether the model expects the novel verb to occur in its sister frame. For the latter, we fine-tune BERT on an incomplete selectional network of verbal objects and ask whether it expects unattested but plausible verb/object pairs. We find that BERT makes robust grammatical generalizations after just one or two instances of a novel word in fine-tuning. For the verbal alternation tests, we find that the model displays behavior that is consistent with a transitivity bias: verbs seen few times are expected to take direct objects, but verbs seen with direct objects are not expected to occur intransitively. The code for our experiments is available at https://github.com/TristanThrush/ few-shot-lm-learning.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training. We address this issue by deploying a novel word-learning paradigm to test BERT's (Devlin et al., 2018) few-shot learning capabilities for two aspects of English verbs: alternations and classes of selectional preferences. For the former, we fine-tune BERT on a single frame in a verbal-alternation pair and ask whether the model expects the novel verb to occur in its sister frame. For the latter, we fine-tune BERT on an incomplete selectional network of verbal objects and ask whether it expects unattested but plausible verb/object pairs. We find that BERT makes robust grammatical generalizations after just one or two instances of a novel word in fine-tuning. For the verbal alternation tests, we find that the model displays behavior that is consistent with a transitivity bias: verbs seen few times are expected to take direct objects, but verbs seen with direct objects are not expected to occur intransitively. The code for our experiments is available at https://github.com/TristanThrush/ few-shot-lm-learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Contemporary deep learning models for language have been shown to learn many aspects of natural language syntax including a number of longdistance dependencies (Gulordava et al., 2018; Marvin and Linzen, 2018; Wilcox et al., 2018) , selectional properties of verbs (Kann et al., 2019) , representations of incremental syntactic state (Futrell et al., 2019) and information from which hierarchical structure can be linearly decoded (Hupkes et al., 2018; Hewitt and Manning, 2019; Lakretz et al., 2019) . These and many other related studies demonstrate an impressive range of human-like linguistic knowledge that is automatically acquired by these models simply from exposure to large quantities of raw text. However, human-like grammatical abilities include not just rich and detailed linguistic knowledge but the ability to deploy this knowledge in using new words based on minimal exposure (Carey and Bartlett, 1978; Gropen et al., 1989; Perek and Goldberg, 2017) . It remains poorly understood what grammatical generalizations contemporary deep learning models are able to make regarding the behavior of words to which they have minimal exposure. In this work, we assess the syntactic generalization behavior of a contemporary neural network model (BERT; Devlin et al. (2018)) on two novel phenomena in English and address the question of single-shot and few-shot learning, demonstrating that BERT makes robust grammatical generalizations after fine-tuning on minimal examples of a novel token.",
"cite_spans": [
{
"start": 160,
"end": 184,
"text": "(Gulordava et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 185,
"end": 209,
"text": "Marvin and Linzen, 2018;",
"ref_id": null
},
{
"start": 210,
"end": 230,
"text": "Wilcox et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 265,
"end": 284,
"text": "(Kann et al., 2019)",
"ref_id": null
},
{
"start": 334,
"end": 356,
"text": "(Futrell et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 431,
"end": 452,
"text": "(Hupkes et al., 2018;",
"ref_id": null
},
{
"start": 453,
"end": 478,
"text": "Hewitt and Manning, 2019;",
"ref_id": null
},
{
"start": 479,
"end": 500,
"text": "Lakretz et al., 2019)",
"ref_id": null
},
{
"start": 892,
"end": 918,
"text": "(Carey and Bartlett, 1978;",
"ref_id": "BIBREF2"
},
{
"start": 919,
"end": 939,
"text": "Gropen et al., 1989;",
"ref_id": "BIBREF8"
},
{
"start": 940,
"end": 965,
"text": "Perek and Goldberg, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test BERT's few-shot learning capabilities on two phenomena at the syntax-semantics interface: English verbal alternations, and verb/object selectional preferences. In English, verbs can appear in multiple syntactic frames; which frame a verb appears in is governed by its argument structure properties. Often, frames are paired into alternation classes (Levin, 1993) such that when English speakers hear a novel verb in one frame they can be confident that it can be used in its alternation-class pair. Using the well-attested dative alternation as an example, if a listener hears the sentence \"I daxed the tennis racket to my friend\" they would expect that \"I daxed my friend the tennis racket\" is a grammatical English sentence, meaning approximately the same thing. They would not, however, have such an expectation for \"I daxed my friend for the tennis racket.\" In addition, listeners may be attuned to semantic clustering of verbal arguments based on past experience. For instance, following the example above, English speakers may expect dax to take an animate indirect object, and would find examples such as \"I daxed the court the tennis racket\" to be surprising.",
"cite_spans": [
{
"start": 357,
"end": 370,
"text": "(Levin, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take inspiration for our testing regime from a class of psycholinguistic experiments known as 'novel word learning studies', which we adapt to the neural setting. In such experiments subjects are exposed to a novel word in context during a training phase, and assessed for what grammatical generalizations they have learned about the novel word during a later testing phase. Novel word learning experiments have been used to assess human grammatical generalization since Berko (1958) , and have been deployed to assess semantic, as well as syntactic, generalizations (Carey and Bartlett, 1978) . In this work, we replicate the novel word learning paradigm in the neural setting by finetuning BERT on tightly-controlled sentences that contain novel verbs and objects, and assessing the model on carefully constructed test sets that reveal what grammatical generalizations it has learned. We find that BERT is able to make proper generalizations for both verbal alternations as well as semantic clustering for verbal arguments after just one or two exposures during training.",
"cite_spans": [
{
"start": 474,
"end": 486,
"text": "Berko (1958)",
"ref_id": "BIBREF1"
},
{
"start": 570,
"end": 596,
"text": "(Carey and Bartlett, 1978)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For each test, we fine-tune BERT with sentences that contain new tokens for novel words. We then assess the the model's learning outcomes in one of two testing settings, described below. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We fine-tune BERT with its masked-language modeling objective to predict each of the novel verb tokens in the training data. We add a new output neuron in the language modeling head, and a new embedding, for each novel word. In order for exposure during fine-tuning to approximate the effect of exposure to low-frequency words during the initial training, we optimize only newly-added weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "2.1"
},
{
"text": "During fine-tuning we mask all open-class content words that are not targeted by the experiment, and add determiners if they can be useful at designating the category of a masked word. Sample fine-tuning sentences are given in (1-a) for our alternation tests and (1-b) for our verb selectional preference tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "2.1"
},
{
"text": "(1) a. The [MASK] Masking content words means that the model must rely on purely syntactic information such as wordorder, prepositions and auxiliary verbs for its syntactic generalizations. We also control for tense within our experiments by using the same verbal tense across conditions within a training context.",
"cite_spans": [
{
"start": 11,
"end": 17,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "2.1"
},
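{
"text": "A minimal sketch of this fine-tuning setup is given below. It assumes the Hugging Face transformers API and the hyperparameters listed in Appendix B (bert-large-uncased, Adam, learning rate 1e-3, 10 epochs); the nonce token [V1] and the training sentence are hypothetical, and for simplicity the new output neuron is tied to the new input embedding (BERT's default weight tying) rather than added separately, so this is not the authors' released code.\n\n# Minimal sketch: fine-tune only the weights added for a novel verb token,\n# using BERT's masked-LM objective.\nimport torch\nfrom transformers import BertForMaskedLM, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-large-uncased\")\nmodel = BertForMaskedLM.from_pretrained(\"bert-large-uncased\")\n\ntokenizer.add_tokens([\"[V1]\"])                 # hypothetical nonce verb\nmodel.resize_token_embeddings(len(tokenizer))  # new embedding row (tied to the LM head)\nnew_id = tokenizer.convert_tokens_to_ids(\"[V1]\")\n\nfor p in model.parameters():                   # freeze all pre-trained weights\n    p.requires_grad = False\nemb = model.get_input_embeddings().weight\nemb.requires_grad = True\n\ndef _keep_only_new_row(grad):\n    kept = torch.zeros_like(grad)\n    kept[new_id] = grad[new_id]                # only the new token's row is updated\n    return kept\nemb.register_hook(_keep_only_new_row)\n\n# Hypothetical fine-tuning sentence in the style of (1-a), with content words masked.\nsentence = \"The [MASK] will [V1] the [MASK] to the [MASK].\"\nenc = tokenizer(sentence, return_tensors=\"pt\")\ninput_ids = enc[\"input_ids\"].clone()\nlabels = torch.full_like(input_ids, -100)      # -100 = ignored by the MLM loss\nverb_pos = input_ids == new_id\nlabels[verb_pos] = new_id                      # predict the nonce verb ...\ninput_ids[verb_pos] = tokenizer.mask_token_id  # ... at a masked position\n\noptimizer = torch.optim.Adam([emb], lr=1e-3)   # Adam, lr = 1e-3 (Appendix B)\nmodel.train()\nfor _ in range(10):                            # 10 epochs, full batch (Appendix B)\n    loss = model(input_ids=input_ids, attention_mask=enc[\"attention_mask\"], labels=labels).loss\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "2.1"
},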
{
"text": "Psycholinguistic Generalization Test: Following Linzen et al. 2016and Futrell et al. (2018) , we gauge BERT's learning outcomes by deriving the novel verb's probability in paired contexts in which the novel token's use is consistent with the training data plus grammatical rules (the in-class context) or inconsistent with the data and the rules (the out-class context). If the token is more likely in the in-class context, then the model can be said to have learned the proper syntactic generalization. For these tests we report the proportion of the time the token is more likely in the in-class contexts across 200 randomly-seeded training runs. The probability of a token, [T] , is derived in the standard way from BERT by inserting a [MASK] token in it's place, and taking BERT's contextualized word embedding of this [MASK] token. This embedding is fed into BERT's language modelling head, which returns a probability for the token, [T], given the context.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Futrell et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 677,
"end": 680,
"text": "[T]",
"ref_id": null
},
{
"start": 823,
"end": 829,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.2"
},
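{
"text": "A minimal sketch of this scoring procedure is shown below, assuming the Hugging Face transformers API; the test frames and the scored verb are hypothetical stand-ins for the paper's fine-tuned nonce verbs and its actual in-class/out-class contexts.\n\n# Minimal sketch: probability of a token at a masked position, per Section 2.2.\nimport torch\nfrom transformers import BertForMaskedLM, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-large-uncased\")\nmodel = BertForMaskedLM.from_pretrained(\"bert-large-uncased\").eval()\n\ndef token_log_prob(context_with_mask, token):\n    \"\"\"Log P(token | context) at the single [MASK] position in the context.\"\"\"\n    token_id = tokenizer.convert_tokens_to_ids(token)\n    enc = tokenizer(context_with_mask, return_tensors=\"pt\")\n    mask_pos = (enc[\"input_ids\"][0] == tokenizer.mask_token_id).nonzero().item()\n    with torch.no_grad():\n        logits = model(**enc).logits[0, mask_pos]\n    return torch.log_softmax(logits, dim=-1)[token_id].item()\n\n# Hypothetical dative-alternation test pair; the paper scores its fine-tuned nonce verbs.\nin_class = \"The woman will [MASK] the man a ticket.\"\nout_class = \"The woman will [MASK] the man with a ticket.\"\nlearned = token_log_prob(in_class, \"give\") > token_log_prob(out_class, \"give\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.2"
},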
{
"text": "We also test BERT by probing the learned representations of embeddings for novel verb tokens directly (we use this method only for the alternation tests). In this testing procedure we train a linear model to predict whether a pre-trained BERT embedding corresponds to a verb that is in a particular alternation class, for example whether it follows the dative alternation or not. We then use the classifier to predict whether the novel verb is a member of the alternation class. Our linear classifiers achieve a mean accuracy of 0.992 on their training set. For the test set, we also report accuracy scores across 200 model runs. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Classification Test:",
"sec_num": null
},
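{
"text": "A minimal sketch of this probe, following the classifier configuration described in Appendix B.2 (a single linear layer over BERT embeddings, Adam with learning rate 1e-1, cross-entropy loss, full-batch training for 20 epochs); the embedding tensors here are random placeholders for the in-class, out-class, and novel-verb embeddings.\n\n# Minimal sketch of the embedding classification probe (configuration from Appendix B.2).\nimport torch\nimport torch.nn as nn\n\nhidden_size = 1024                         # bert-large-uncased embedding size\n\n# Placeholder data: rows stand in for BERT verb embeddings; label 1 = in-class\n# (follows the alternation), label 0 = out-class (distractor or high-frequency verb).\ntrain_x = torch.randn(60, hidden_size)\ntrain_y = torch.randint(0, 2, (60,))\n\nprobe = nn.Linear(hidden_size, 2)\noptimizer = torch.optim.Adam(probe.parameters(), lr=1e-1)\nloss_fn = nn.CrossEntropyLoss()\n\nfor _ in range(20):                        # 20 epochs, full training set per batch\n    optimizer.zero_grad()\n    loss = loss_fn(probe(train_x), train_y)\n    loss.backward()\n    optimizer.step()\n\n# At test time, the probe labels the novel verb's (placeholder) embedding.\nnovel_verb_embedding = torch.randn(1, hidden_size)\nis_in_class = probe(novel_verb_embedding).argmax(dim=-1).item() == 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Classification Test:",
"sec_num": null
},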
{
"text": "Verbs can impose a variety of selectional restrictions on semantic properties of nouns to limit which clusters of nouns they accept. Just to name a few, these restrictions can require an object to be animate or inanimate, a location, or a raw material (Levin, 1993) . In this section, we ask what generalizations BERT makes about a verb's selectional restrictions based on incomplete, limited exposure. For our experiments, we define selectional restrictions as a model's expectations for a verb and object to appear together in a simple active transitive sentence, and ask whether BERT can make generalizations about selectional restrictions from indirect evidence, following the incomplete selectional network given in Fig. 1 (a) . Indirect evidence plays an important role in human language learning. The role of indirect negative evidence has been the focus of much debate in discussions of innate human learning biases (Marcus, 1993; Clark and Lappin, 2010) , and indirect evidence has also been shown to play an important role in the learning of novel verbs in both adults and children (Perek and Goldberg, 2017; Yuan and Fisher, 2009; Gropen et al., 1989) To assess BERT's ability to leverage indirect negative evidence for verbal selection classes, we finetune the model on 12 sentences with verb/object pairings that correspond to the solid lines in Figure 1 .\" Each novel verb and each novel noun occur twice in the fine-tuning set, meaning that this test assesses the model's few-shot generalization capabilities. The network of verb-noun relations in the 12 fine-tuning sentences implicitly creates two classes of verbs: verbs within a class can be connected with a path through the solid lines. If the model leverages this incomplete evidence to make class-based generalizations, we predict that novel in-class verb/object pairings (which we indicate with dashed lines in the figure) should be more expected than novel outclass verb-object pairings, despite neither having been directly attested in the fine-tuning data.",
"cite_spans": [
{
"start": 252,
"end": 265,
"text": "(Levin, 1993)",
"ref_id": null
},
{
"start": 924,
"end": 938,
"text": "(Marcus, 1993;",
"ref_id": null
},
{
"start": 939,
"end": 962,
"text": "Clark and Lappin, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 1092,
"end": 1118,
"text": "(Perek and Goldberg, 2017;",
"ref_id": null
},
{
"start": 1119,
"end": 1141,
"text": "Yuan and Fisher, 2009;",
"ref_id": null
},
{
"start": 1142,
"end": 1162,
"text": "Gropen et al., 1989)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 721,
"end": 731,
"text": "Fig. 1 (a)",
"ref_id": "FIGREF0"
},
{
"start": 1359,
"end": 1368,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},
{
"text": "In order to assess the learning outcome of the model, we follow our psycholinguistic generalization test methodology to derive the probabilities of the verbs in simple active transitive sentences across three testing contexts: In the attested inclass condition, we compute the average probability of the verbs in sentences where they are paired with their nouns seen during fine-tuning. This set consisted of 12 sentences, corresponding to the solid lines in Figure 1 (a) . In unattested in-class we compute the probability of the verbs when paired with their unattested, but in-class nouns. This set consisted of 6 sentences, corresponding to the dashed lines in Figure 1 (a) . In the unattested outclass we compute the probability of the verbs when paired with nouns from the other class. This set consisted of 18 sentences, corresponding to verbnoun combinations that are not connected by lines in Figure 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 471,
"text": "Figure 1 (a)",
"ref_id": "FIGREF0"
},
{
"start": 664,
"end": 676,
"text": "Figure 1 (a)",
"ref_id": "FIGREF0"
},
{
"start": 901,
"end": 909,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},
{
"text": "(a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},
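{
"text": "The three testing conditions can be read directly off the verb/object network. The sketch below enumerates them from a hypothetical attested network with the same shape as Fig. 1(a) (two classes of three nonce verbs and three nonce nouns, each verb attested with two in-class nouns); the token names are placeholders, not the paper's actual nonce tokens.\n\n# Minimal sketch: enumerate the three test conditions of Section 3 from an\n# incomplete verb/object network with two implicit classes (hypothetical pairings).\nfrom itertools import product\n\nclass_a = {\"verbs\": [\"[V1]\", \"[V2]\", \"[V3]\"], \"nouns\": [\"[N1]\", \"[N2]\", \"[N3]\"]}\nclass_b = {\"verbs\": [\"[V4]\", \"[V5]\", \"[V6]\"], \"nouns\": [\"[N4]\", \"[N5]\", \"[N6]\"]}\n\n# Attested pairings (solid lines): each verb is seen with two in-class nouns, so each\n# class forms a connected network without any verb being attested with every noun.\nattested = set()\nfor cls in (class_a, class_b):\n    for i, verb in enumerate(cls[\"verbs\"]):\n        attested.add((verb, cls[\"nouns\"][i]))\n        attested.add((verb, cls[\"nouns\"][(i + 1) % 3]))\n\nin_class_all = set(product(class_a[\"verbs\"], class_a[\"nouns\"])) | set(product(class_b[\"verbs\"], class_b[\"nouns\"]))\nout_class_all = set(product(class_a[\"verbs\"], class_b[\"nouns\"])) | set(product(class_b[\"verbs\"], class_a[\"nouns\"]))\n\nattested_in_class = sorted(attested)                   # 12 pairs (solid lines)\nunattested_in_class = sorted(in_class_all - attested)  # 6 pairs (dashed lines)\nunattested_out_class = sorted(out_class_all)           # 18 pairs (no lines)\n\n# Each pairing becomes a simple transitive test sentence, e.g. \"The [MASK] [V1] the [N1].\"\ntest_sentences = [f\"The [MASK] {v} the {n}.\" for v, n in unattested_in_class]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},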
{
"text": "The results of this experiment can be seen in Figure 1 (b) and (c). Part (c) shows the average surprisal (or negative log probability) of the verbs in the three testing contexts. In (b) we see model 'accuracy', or the proportion of times the model assigns lower surprisal to the higher evidence verb/object pairs. For example, for the attested in-class vs. unattested in-class the y-axis is the proportion of the time the attested in-class verbs are given lower surprisal. Results are averaged across all six novel verbs and the proportions are taken accross 200 random model seeds. Our predictions are as follows: For the accuracy test, if the model is able to pick up patterns in the fine-tuning data, we expect the comparison between seen items and unseen items to be greater than the 50% random baseline. If the model is able to go beyond the patterns in the training data and make class-based generalizations, then we expect the unattested in-class vs. unattested out-class comparison, too, to be higher than the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},
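{
"text": "As a small worked example of how these quantities relate (with placeholder numbers, and assuming a per-seed log-probability score such as the one derived in Section 2.2): surprisal is the negative log probability, and the reported 'accuracy' is the proportion of fine-tuning runs in which the higher-evidence pairing receives the lower surprisal.\n\n# Minimal sketch: surprisal comparison and per-seed accuracy (placeholder scores).\ndef surprisal(log_prob):\n    \"\"\"Surprisal in nats: negative log probability.\"\"\"\n    return -log_prob\n\n# One log-probability score per random seed for each condition (3 of the 200 seeds shown).\nattested_in_class_scores = [-2.1, -1.8, -2.4]\nunattested_in_class_scores = [-2.9, -3.1, -2.2]\n\nwins = [surprisal(a) < surprisal(u)\n        for a, u in zip(attested_in_class_scores, unattested_in_class_scores)]\naccuracy = sum(wins) / len(wins)  # proportion of seeds favoring the higher-evidence pairing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},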
{
"text": "Examining verb surprisal on the right, we see significant contrasts between each of the conditions (p<0.001); crucially, the unattested in-class pairings are less surprising (i.e. higher probability) than the unattested out-class pairings, despite the model having seen neither pairing during training. This pattern is confirmed with the accuracy scores, where all three contrasts are significantly higher than the 50% random baseline (p<0.001). These results provide strong evidence that BERT is not only sensitive to the minimal amount of data on which it was fine-tuned, but also able to leverage indirect evidence during fine-tuning to make syntactic generalizations, which drive behavior at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3"
},
{
"text": "English is attested to have at least 83 distinct verbal alternation classes, which were analyzed and categorized in meticulous detail in Levin (1993) . In these experiments we consider all verbal alternation classes for which there are two constant frames and for which Levin provides a list of example verbs as well as a list of \"distractor\" verbs-verbs that fit in one frame but not the other-which we require for our embedding classification test paradigm. All of the alternation classes we test come from the first three sections of Levin's 'English Verb Classes and Alternations.' To give a brief flavor of the range of English verbal alternations, we give three examples below.",
"cite_spans": [
{
"start": 137,
"end": 149,
"text": "Levin (1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "Understood Reciprocal Alternation a. The senator will meet the activist. b. The senator and the activist will meet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "(3) Spray/Load Alternation a. The girl will spray the wall with paint. b. The girl will spray paint onto the wall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "(4) Raw Material Subject Alternation a. The girl will make wonderful bread from that flour. b. That flour will make wonderful bread.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "Verbs like meet in Example (2) undergo transitivity alternations, where the verb takes a direct object in one frame but not the other. Verbs like spray in Example (3) involve alternations for transitive verbs that take more than one non-subject argument, and allow for multiple ways of expressing the arguments. Verbs like make in Example (4) involve \"oblique\" subject alternations, where the verb takes one fewer argument in one verbal frame. It is important to note that Levin makes a categorical distinction between these three types of verbal alternation classes and analyzes them each in their own section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "For each of the attested alternations, we create one fine-tuning sentence for each frame using the example frames provided by Levin. We replace the attested verb from the example with a novel verb token and mask content words as discussed in Section 2. 2 We provide tests using both the psycholinguistic generalization and the embedding classification methodology. These are two different ways of probing the generalizations that BERT is able to make, but they result in qualitatively similar results. For our psycholinguistic assessment test, we derive the probability of the novel verb in its alternation-pair frame (this is the in-class context), and the mean probability of the verb across all of the other verbal frames that do not form one of our alternation classes with the training frame (these are the out-class contexts). For our embedding classification test, we train two classifiers for each frame: The first predicts between attested verbs that follow one of the frame's alternations provided by Levin, and a set of out-class distractor verbs that can appear in one of the frames but not the other, also provided by Levin. The second predicts Psycholinguistic Generalization Tests 1-1 1-1 2-3 2-3 2-3 2-3 2-4 2-4 2-4 2-4 2-4 2-4 2-4 2-4 2-5 2-5 2-5 2-5 2-6 2-6 2-7 2-7 1-2 1-2 2-8 2-8 2-9 2-9 2-10 2-10 2-13 2-13 2-13 2-13 2-13 2-13 2-14 2-14 3-8 3-8 3-10 3-10 1-2 1-2 1-2 1-2 1-3 1-3 1-4 1-4 2-1 2-1 2-2 2-2 2-3 2-3 The cup will break.",
"cite_spans": [
{
"start": 253,
"end": 254,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "The tax cuts will benefit the middle class. between the attested verbs and an out-class set of the 150 most frequent verbs from the Corpus of Contemporary American English (COCA) (Davies, 2008-) , pruned of auxiliary and modal verbs and verbs that already appear in Levin's lists. For each verbal alternation, we run two classification tests, one for each frame in the alternation.",
"cite_spans": [
{
"start": 179,
"end": 194,
"text": "(Davies, 2008-)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Alternation Classes",
"sec_num": "4"
},
{
"text": "The results from our psycholinguistic assessment test can be seen in Figure 2 . On the top row we show mean accuracy scores across 200 random seeds for all of the alternations tested. On the bottom panel we zoom in on a few key examples, specifically instances where the model performs below the 50% baseline on one of the training frames. Here, we have flipped the axes for readability. For each alternation tested, our charts include two bars, which correspond to the two separate training frames. These training frames are labeled in the bottom figure, with the label corresponding to the type of sentence that we fine-tune the model on. If the model shows high accuracy scores on both bars, it means it has learned the bidirectionality of the alternation. If it shows high accuracy scores in only one training frame, however, it means that it has only learned to generalize from that frame to its sister. Across all our figures, alternations are colored and labeled by the section and first-level subsection of Levin (1993) (e.g. 1-4 means Section 1 Subsection 4, etc.). Error bars are 95% binomial confidence intervals across the 200 random seeds. To see a full-breakdown of all alternations and training frames tested see Appendix C.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Psycholinguistic Assessment Results",
"sec_num": "4.1"
},
{
"text": "In terms of top-level performance, BERT performs quite well. Across all alternation classes, the model achieves 82% accuracy, which is significantly higher than the 50% random baseline (p<0.001) and for about half the alternation tests, BERT achieves accuracy scores that are at, or near 100%. Note that the model's performance at these tests generally corresponds to the top-level subsec-Embedding Classification Test 2-5 2-5 2-5 2-5 2-14 2-14 2-13 2-13 2-2 2-2 2-10 2-10 1-1 1-1 2-3 2-3 1-3 1-3 2-1 2-1 2-6 2-6 2-7 2-7 2-4 2-4 2-4 2-4 2-13 2-13 2-13 2-13 3-8 3-8 3-10 3-10 2-3 2-3 2-3 2-3 2-9 2-9 2-4 2-4 2-4 2-4 1-2 1-2 1-2 1-2 1-2 1-2 1-4 1-4 2-8 2-8 2-5 2-5 2-5 2-5 2-14 2-14 2-13 2-13 2-2 2-2 2-10 2-10 1-1 1-1 2-3 2-3 1-3 1-3 2-1 2-1 2-6 2-6 2-7 2-7 2-4 2-4 2-4 2-4 2-13 2-13 2-13 2-13 3-8 3-8 3-10 3-10 2-3 2-3 2-3 2-3 2-9 2-9 2-4 2-4 2-4 2-4 1-2 1-2 1-2 1-2 1-2 1-2 1-4 1- tion of (Levin, 1993), with generally higher scores from Sections 2 and 3, and lower scores from Section 1 (darker blue and purple bars), which correspond to alternations that involve a change in transitivity. Another observation is that when the model does fail, it does so for only one of the two frames. For all cases where the model performs below baseline on one of the training frames, it performs at, or above 75% accuracy on the other frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Psycholinguistic Assessment Results",
"sec_num": "4.1"
},
{
"text": "Zooming in on the cases where the model fails to generalize, we see a robust pattern: Almost all cases where model accuracy scores are below 25% are for transitive alternation frames in which the model is being fine-tuned on a single example with a direct object and asked to generalize to cases where the direct object is absent. For example, with the Understood Reflexive Object Alternation BERT was \u223c25% accurate when fine-tuned with the frame of the example \"The boy will [nonce] himself\" but \u223c75% accurate when fine-tuned with the frame of \"The boy will [nonce].\" At a high level, this means that given a single instances of a verb without an object, models expect that it will occur with a direct object, at least more-so than with oblique or prepositional objects (the various out-class frames). However, when given a single instance of a transitive verb, models do not expect it to occur intransitively. The fact that tokens seen only a few times are generally expected to be able to take direct objects suggests a transitivity learning bias in the model. Such a bias would align with recent work assessing few-shot learning of syntactic categories, specifically Jumelet et al. 2019, who hypothesize that models learn default category for number and gender, and Wilcox et al. (2020), who provide data from few-shot learning tests that is consistent with the hypotheses in Jumelet et al. (2019). Interestingly, the results form Wilcox et al. (2020) also suggest that the models tested learn a default transitive category for verbs, although they test Recurrent Neural Network models, not transformers, so more careful cross model comparisons are needed.",
"cite_spans": [
{
"start": 476,
"end": 483,
"text": "[nonce]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Psycholinguistic Assessment Results",
"sec_num": "4.1"
},
{
"text": "The results from our classification assessment test can be seen in Figure 3 . Accuracy scores are on the y-axis and verbal alternation classes are on the xaxis, with the results from the distractor out-class on the top panel and the high-frequency out-class on the bottom panel. Across all verbal alternations and out-class groups tested, BERT achieves an average accuracy of 69%, which is significantly higher than the 50% baseline (p<0.001), and does not perform significantly better or worse on either the distractor or high-frequency out-classes (p=0.6).",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Classificaiton Assessment Results",
"sec_num": "4.2"
},
{
"text": "As before, the model performs generally worse on alternations from Section 1 of (Levin, 1993), although BERT's performance on the classification assessment test is much more varied than its performance on the psycholinguistic assessment tests. That being said, the scores are correlated (rank performance cor = 0.49, p < 0.001; raw accuracy scores cor = 0.17, p = 0.08).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classificaiton Assessment Results",
"sec_num": "4.2"
},
{
"text": "We used a novel word learning paradigm, inspired by classic studies from psycholinguistics, to assess BERT's syntactic generalization behavior on two novel phenomena: English verb class alternations and verb/object selectional restrictions. In both cases we address the issue of single and few-shot learning by fine-tuning the model on just one or two positive examples, finding that BERT makes some generalizations about a novel token based on minimal experience, and that these generalizations drive robust behavior during test time. This novel word learning paradigm can continue to be explored in later work through the use of large databases such as VerbNet (Schuler, 2005) , which builds on Levin's verb documentations by providing a larger database of verb alternations and sectional restrictions that can be turned into train and test sentences for BERT without hand-crafting.",
"cite_spans": [
{
"start": 663,
"end": 678,
"text": "(Schuler, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For verbal/object selectional restrictions, we find that BERT leverages indirect evidence to expect unattested but plausible verb/noun pairings more than unattested but implausible pairings. These results provide evidence for the view that the model is able to attend not just to patterns overtly realized in the data (direct evidence) but also implicit relationships between tokens (indirect evidence). The ability to use indirect evidence, specifically indirect negative evidence, is a hallmark of human language learning, and these results indicate that models are capable of similar behavior in a simple novel word learning paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For verbal alternations, we find that when finetuned on a single frame, BERT routinely expects the verb to occur in its sister frame with a higher likelihood than in unrelated verbal frames. Interestingly, this behavior is consistently blocked when the model is asked to generalize from a frame that involves an object to a frame where the object is lacking. This behavior is consistent with a general bias towards transitivity in the model, and suggests an exciting direction for further study. Whether such a general bias exists, whether it is restricted to settings with limited evidence, and whether it changes as verbs appear more frequently in the fine-tuning or training data is a question for future research. Another question for future research is whether a multilingual BERT would have the same success on alternation tests in other languages, and if if would exhibit the same biases that we see for English. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Each subsection contains Levin's example of an alternation, followed by training data for BERT that exemplifies the alternation with a novel verb token: [Vn] . A \"distractor\" example from Levin of a verb that does not follow the alternation is also given.",
"cite_spans": [
{
"start": 153,
"end": 157,
"text": "[Vn]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Alternation Material",
"sec_num": null
},
{
"text": "Janet broke/forfeited the cup. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Causative/Inchoative",
"sec_num": null
},
{
"text": "The witch turned/compiled him into a frog. The witch turned/*compiled him from a prince into a frog. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.14 Total Transformation (transitive)",
"sec_num": null
},
{
"text": "The middle class will benefit/gain from the new tax laws. The new tax laws will benefit/*gain the middle class. The [MASK] BERT version = bert-large-uncased from https: //github.com/huggingface/transformers, Optimizer = Adam (Kingma and Ba, 2015), learning rate = 1e-3, batch size = full training set size (each training sentence is a separate datum and is enclosed by a start and end token), epochs = 10.",
"cite_spans": [
{
"start": 116,
"end": 122,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.28 Source Subject",
"sec_num": null
},
{
"text": "Architecture = linear layer with an input size the same as that of a BERT embedding and an output size of 2, optimizer = Adam, learning rate = 1e-1, batch size = full training set, epochs = 20, loss = Cross Entropy; trained to label a datum as in-class or out-class with labels of 1 and 0, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Linear Classifier",
"sec_num": null
},
{
"text": "Full Breakdown: Figure 4 shows psycholinguistic generalization test accuracy to sister frames for every verbal alternation and training frame tested, colored by section and subsection from (Levin, 1993). Error bars show 95% binomial confidence intervals across 200 random seeds; the blue dashed line is the random baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Psycholinguistic Generalization Test:",
"sec_num": null
},
{
"text": "For detailed information model architecture and training, see Appendix B. Unless otherwise noted, statistical tests are the result of linear mixed effects models with maximal random effects structure as advocated in(Barr et al., 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Examples of each alternation class and fine-tuning sentences can be found in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge support from the MIT-IBM AI Research Lab and a Google Faculty Research Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Random effects structure for confirmatory hypothesis testing: Keep it maximal",
"authors": [
{
"first": "J",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Barr",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Harry",
"middle": [
"J"
],
"last": "Scheepers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tily",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "68",
"issue": "",
"pages": "255--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale J Barr, Roger Levy, Christoph Scheepers, and Harry J Tily. 2013. Random effects structure for con- firmatory hypothesis testing: Keep it maximal. Jour- nal of memory and language, 68(3):255-278.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The child's learning of english morphology",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Berko",
"suffix": ""
}
],
"year": 1958,
"venue": "Word",
"volume": "14",
"issue": "2-3",
"pages": "150--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Berko. 1958. The child's learning of english mor- phology. Word, 14(2-3):150-177.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Acquiring a single new word",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Carey",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Bartlett",
"suffix": ""
}
],
"year": 1978,
"venue": "Papers and Reports on Child Language Development",
"volume": "15",
"issue": "",
"pages": "17--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Carey and Elsa Bartlett. 1978. Acquiring a sin- gle new word. Papers and Reports on Child Lan- guage Development, 15:17-29.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Linguistic Nativism and the Poverty of the Stimulus",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark and Shalom Lappin. 2010. Linguis- tic Nativism and the Poverty of the Stimulus. John Wiley & Sons.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Corpus of contemporary american english (coca)",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Davies. 2008-. Corpus of contemporary ameri- can english (coca).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01329"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic sub- jects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural language models as psycholinguistic subjects: Representations of syntactic state",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic sub- jects: Representations of syntactic state. In Pro- ceedings of the 18th Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Minneapolis.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The learnability and acquisition of the dative alternation in english",
"authors": [
{
"first": "Jess",
"middle": [],
"last": "Gropen",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Pinker",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Hollander",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "203--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jess Gropen, Steven Pinker, Michelle Hollander, Richard Goldberg, and Ronald Wilson. 1989. The learnability and acquisition of the dative alternation in english. Language, pages 203-257.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11138"
]
},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Color- less green recurrent networks dream hierarchically. arXiv preprint arXiv:1803.11138.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "(a): Selectional restrictions imposed on the 6 nonce verbs and 6 nonce nouns in the fine-tuning data. Each verb (rectangled) appears with two nouns (circled), such that the full selectional paradigm for the verb must be inferred. (b) and (c): Results from our selectional preference tests, showing significant difference between all contrasts tested.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "(a). The fine-tuning set (and the test set) consist of simple transitive sentences, following the form \"The [MASK] [Verb1] the [Noun1]",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Psycholinguistic generalization test accuracy to sister frames by verbal alternation, colored by section and subsection from(Levin, 1993). Top figure shows accuracy scores for all alternations tested. Bottom figure shows detailed information for alternations where one frame achieved lower than 50% accuracy. Error bars show 95% binomial confidence intervals across 200 random seeds; blue dashed line is the random baseline.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Accuracy scores from our embedding classification test, by out-class verbs on the different panels. Random baseline is the blue dotted line. Bars are colored by section and subsection from(Levin, 1993). Error bars show 95% binomial confidence intervals across 200 random seeds.",
"num": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td/><td/><td>Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke</td></tr><tr><td colspan=\"2\">Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema.</td><td>Kohita, Sylvia Yuan and Cynthia Fisher. 2009. \"really? she</td></tr><tr><td colspan=\"2\">2018. Visualisation and'diagnostic classifiers' re-</td><td>blicked the baby?\" two-year-olds learn combinato-</td></tr><tr><td colspan=\"2\">veal how recurrent and recursive neural networks</td><td>rial facts about verbs by listening. Psychological</td></tr><tr><td colspan=\"2\">process hierarchical structure. Journal of Artificial</td><td>science, 20(5):619-626.</td></tr><tr><td>Intelligence Research, 61:907-926.</td><td/></tr><tr><td colspan=\"2\">Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes.</td></tr><tr><td colspan=\"2\">2019. Analysing neural language models: Con-</td></tr><tr><td colspan=\"2\">textual decomposition reveals default reasoning in</td></tr><tr><td colspan=\"2\">number and gender assignment. arXiv preprint</td></tr><tr><td>arXiv:1909.08975.</td><td/></tr><tr><td colspan=\"2\">Katharina Kann, Alex Warstadt, Adina Williams, and</td></tr><tr><td colspan=\"2\">Samuel R. Bowman. 2019. Verb argument structure</td></tr><tr><td colspan=\"2\">alternations in word and sentence embeddings. In</td></tr><tr><td colspan=\"2\">Proceedings of the Society for Computation in Lin-</td></tr><tr><td>guistics (SCiL), pages 287-297.</td><td/></tr><tr><td colspan=\"2\">D. Kingma and J. Ba. 2015. Adam: A method for</td></tr><tr><td colspan=\"2\">stochastic optimization. In Proceedings of the 3rd</td></tr><tr><td colspan=\"2\">International Conference for Learning Representa-</td></tr><tr><td>tions.</td><td/></tr><tr><td colspan=\"2\">Yair Lakretz, German Kruszewski, Theo Desbordes,</td></tr><tr><td colspan=\"2\">Dieuwke Hupkes, Stanislas Dehaene, and Marco Ba-</td></tr><tr><td colspan=\"2\">roni. 2019. The emergence of number and syn-</td></tr><tr><td colspan=\"2\">tax units in lstm language models. arXiv preprint</td></tr><tr><td>arXiv:1903.07435.</td><td/></tr><tr><td colspan=\"2\">Beth Levin. 1993. English verb classes and alterna-</td></tr><tr><td colspan=\"2\">tions: A preliminary investigation. University of</td></tr><tr><td>Chicago press.</td><td/></tr><tr><td colspan=\"2\">Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg.</td></tr><tr><td colspan=\"2\">2016. Assessing the ability of lstms to learn syntax-</td></tr><tr><td colspan=\"2\">sensitive dependencies. Transactions of the Associa-</td></tr><tr><td colspan=\"2\">tion for Computational Linguistics, 4:521-535.</td></tr><tr><td colspan=\"2\">Gary F Marcus. 1993. Negative evidence in language</td></tr><tr><td>acquisition. Cognition, 46(1):53-85.</td><td/></tr><tr><td colspan=\"2\">Rebecca Marvin and Tal Linzen. 2018. Targeted syn-</td></tr><tr><td colspan=\"2\">tactic evaluation of language models. arXiv preprint</td></tr><tr><td>arXiv:1808.09031.</td><td/></tr><tr><td colspan=\"2\">Florent Perek and Adele E Goldberg. 2017. Linguis-</td></tr><tr><td colspan=\"2\">tic generalization on the basis of function and con-</td></tr><tr><td colspan=\"2\">straints on the basis of statistical preemption. Cog-</td></tr><tr><td>nition, 168:276-293.</td><td/></tr><tr><td colspan=\"2\">Karin Kipper Schuler. 2005. Verbnet: A broad-</td></tr><tr><td>coverage, comprehensive verb lexicon.</td><td/></tr><tr><td colspan=\"2\">Ethan Wilcox, Roger Levy, Takashi Morita, and</td></tr><tr><td colspan=\"2\">Richard Futrell. 2018. 
What do rnn language mod-</td></tr><tr><td>els learn about filler-gap dependencies?</td><td>arXiv</td></tr><tr><td>preprint arXiv:1809.00042.</td><td/></tr></table>",
"text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138. Riger Levy, and Miguel Ballesteros Ballesteros. 2020. Structural supervision improves fewshot learning and syntactic generalization in neural language models. Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>[MASK]. The [MASK] will [V21.2] the [MASK]</td><td>The [MASK] will [V14.1] the [MASK] into a</td></tr><tr><td>with a [MASK].</td><td>[MASK]. The [MASK] will [V14.2] the [MASK]</td></tr><tr><td/><td>from a [MASK] into a [MASK].</td></tr><tr><td>A.22 blame</td><td/></tr><tr><td/><td>A.15 Total Transformation (intransitive)</td></tr><tr><td/><td>He turned/grew into a frog. He turned/*grew from</td></tr><tr><td/><td>a prince into a frog.</td></tr><tr><td/><td>The [MASK] will [V15.1] into a [MASK]. The</td></tr><tr><td/><td>[MASK] will [V15.2] from a [MASK] into a</td></tr><tr><td/><td>[MASK].</td></tr><tr><td>A.23 Possessor Object</td><td>A.16 apart Reciprocal (transitive)</td></tr><tr><td>They praised/detected the volunteers' dedication. They praised/*detected the volunteers for their ded-ication. The [MASK] will [V23.1] their [MASK]. The [MASK] will [V23.2] them for their [MASK].</td><td>I broke/disconnected the twig off (of) the branch. I broke/*disconnected the twig and the branch apart. The [MASK] will [V16.1] the [MASK] off of the [MASK]. The [MASK] will [V16.2] the [MASK] and the [MASK] apart.</td></tr><tr><td>A.24 Attribute Object</td><td>A.17 apart Reciprocal (intransitive)</td></tr><tr><td>I admired/praised his honesty. I admired/*praised</td><td>The twig broke/disconnected off (of) the branch.</td></tr><tr><td>the honesty in him.</td><td>The twig and the branch broke/*disconnected apart.</td></tr><tr><td>The [MASK] will [V24.1] their [MASK]. The</td><td>The [MASK] will [V17.1] off of the [MASK]. The</td></tr><tr><td>[MASK] will [V24.2] the [MASK] in them.</td><td>[MASK] and the [MASK] will [V17.2] apart.</td></tr><tr><td>A.25 Possessor and Attribute</td><td>A.18 Fulfilling</td></tr><tr><td>I admired/*detected him for his honesty. I ad-</td><td>The judge presented/offered a prize to the winner.</td></tr><tr><td>mired/detected the honesty in him.</td><td>The judge presented/*offered the winner with a</td></tr><tr><td>The [MASK] will [V25.1] them for their [MASK].</td><td>prize.</td></tr><tr><td>The [MASK] will [V25.2] the [MASK] in them.</td><td>The [MASK] will [V18.1] a [MASK] to the</td></tr><tr><td/><td>[MASK]. The [MASK] will [V18.2] the [MASK]</td></tr><tr><td>A.26 as</td><td>with a [MASK].</td></tr><tr><td>The president appointed/declared Smith press sec-</td><td/></tr><tr><td>retary. The president appointed/*declared Smith as</td><td>A.19 Image Impression</td></tr><tr><td>press secretary. The [MASK] will [V26.1] the [MASK] the [MASK]. The [MASK] will [V26.2] the [MASK] as the [MASK].</td><td>The jeweller inscribed/transcribed the name on the ring. The jeweller inscribed/*transcribed the ring with the name. The [MASK] will [V19.1] the [MASK] on the</td></tr><tr><td>A.27 Raw Material Subject</td><td>[MASK]. The [MASK] will [V19.2] the [MASK]</td></tr><tr><td>She baked/invented wonderful bread from that</td><td>with the [MASK].</td></tr><tr><td>whole wheat flour. That whole wheat flour</td><td>A.20 with/against</td></tr><tr><td>bakes/*invents wonderful bread.</td><td>Brian hit/threw the stick against the fence. Brian</td></tr><tr><td>The [MASK] will [V27.1] the [MASK] from that</td><td>hit/*threw the fence with the stick.</td></tr><tr><td>[MASK]. That [MASK] will [V27.2] the [MASK].</td><td>The [MASK] will [V20.1] the [MASK] against the</td></tr><tr><td/><td>[MASK]. 
The [MASK] will [V20.2] the [MASK]</td></tr><tr><td/><td>with the [MASK].</td></tr><tr><td/><td>A.21 through/with</td></tr><tr><td/><td>Alison pierced/*hit the needle through the cloth.</td></tr><tr><td/><td>Alison pierced/hit the cloth with a needle.</td></tr><tr><td/><td>The [MASK] will [V21.1] the [MASK] through the</td></tr></table>",
"text": "Mira blamed/*hated the accident on Terry. Mira blamed/hated Terry for the accident. The [MASK] will [V22.1] the [MASK] on the [MASK]. The [MASK] will [V22.2] the [MASK] for the [MASK].",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>B Model Details</td></tr><tr><td>B.1 BERT tuning</td></tr></table>",
"text": "will [V28.1] from the [MASK]. The [MASK] will [V28.2] the [MASK].",
"type_str": "table",
"html": null
}
}
}
}