{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:29:54.250534Z"
},
"title": "Cloze Distillation: Improving Neural Language Models with Human Next-Word Predictions",
"authors": [
{
"first": "Tiwalayo",
"middle": [
"N"
],
"last": "Eisape",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Noga",
"middle": [],
"last": "Zaslavsky",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Roger",
"middle": [
"P"
],
"last": "Levy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-theart language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show potential improvement in reading time prediction and generalization to held-out human cloze data.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-theart language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show potential improvement in reading time prediction and generalization to held-out human cloze data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern language models (LMs) demonstrate outstanding general-purpose command over language. The majority of these models acquire language by maximizing the in-context probability of each word in their training corpus (Figure 1 ), typically with a self-supervised objective. This simple corpus probability matching has resulted in models that learn impressive powers of both psychometric prediction (Frank and Bod, 2011; Fossum and Levy, 2012; Frank et al., 2015; Goodkind and Bicknell, 2018; Hale et al., 2018; van Schijndel and Linzen, 2018; Warstadt and Bowman, 2020; and language more generally (Devlin et al., 2019; Radford et al., 2019) .",
"cite_spans": [
{
"start": 398,
"end": 419,
"text": "(Frank and Bod, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 420,
"end": 442,
"text": "Fossum and Levy, 2012;",
"ref_id": "BIBREF13"
},
{
"start": 443,
"end": 462,
"text": "Frank et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 463,
"end": 491,
"text": "Goodkind and Bicknell, 2018;",
"ref_id": "BIBREF18"
},
{
"start": 492,
"end": 510,
"text": "Hale et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 511,
"end": 542,
"text": "van Schijndel and Linzen, 2018;",
"ref_id": "BIBREF48"
},
{
"start": 543,
"end": 569,
"text": "Warstadt and Bowman, 2020;",
"ref_id": "BIBREF57"
},
{
"start": 598,
"end": 619,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 620,
"end": 641,
"text": "Radford et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 217,
"end": 226,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In humans, prediction may underlie both learning (Kuhl, 2004; Huang and Snedeker, 2013) and processing (Ryskin et al., 2020; Levy, 2008; Clark, 2013) . Human linguistic prediction can be understood as not only lexical but also as taking place both above and below the word level (Federmeier and Kutas, 1999; Federmeier et al., 2002) ; parallel, i.e., predictive commitments are maintained over several linguistic units at once (Levy, 2008) ; and graded, i.e., commitment is licensed to varying degrees based on features of the linguistic unit being predicted. Rather than placing bets (Jackendoff, 1987) on which single word will come next, humans make many diffuse bets at multiple linguistic levels (e.g., syntactic, orthographic, lexical, etc.) .",
"cite_spans": [
{
"start": 49,
"end": 61,
"text": "(Kuhl, 2004;",
"ref_id": "BIBREF31"
},
{
"start": 62,
"end": 87,
"text": "Huang and Snedeker, 2013)",
"ref_id": "BIBREF26"
},
{
"start": 103,
"end": 124,
"text": "(Ryskin et al., 2020;",
"ref_id": "BIBREF45"
},
{
"start": 125,
"end": 136,
"text": "Levy, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 137,
"end": 149,
"text": "Clark, 2013)",
"ref_id": "BIBREF3"
},
{
"start": 295,
"end": 307,
"text": "Kutas, 1999;",
"ref_id": "BIBREF11"
},
{
"start": 308,
"end": 332,
"text": "Federmeier et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 427,
"end": 439,
"text": "(Levy, 2008)",
"ref_id": "BIBREF35"
},
{
"start": 585,
"end": 603,
"text": "(Jackendoff, 1987)",
"ref_id": "BIBREF27"
},
{
"start": 701,
"end": 747,
"text": "(e.g., syntactic, orthographic, lexical, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Surprisal theory (Hale, 2001; Levy, 2008) describes the utility of the approach taken by the human language processor, as lexical prediction is often an ill-constrained classification problem -for agents with very large vocabularies (LMs, humans), context is often not sufficiently constraining for high accuracy multiple, thousand-way classification decisions, but is typically constraining enough to accurately infer next-word features (such as part of speech, and semantic category). A large body of evidence demonstrates that these graded next-word predictions are reflected in human processing times (Ehrlich and Rayner, 1981; Demberg and Keller, 2008; Smith and Levy, 2013; Luke and Christianson, 2016) as well as neural responses (Kutas and Hillyard, 1980; Frank et al., 2015) .",
"cite_spans": [
{
"start": 17,
"end": 29,
"text": "(Hale, 2001;",
"ref_id": "BIBREF19"
},
{
"start": 30,
"end": 41,
"text": "Levy, 2008)",
"ref_id": "BIBREF35"
},
{
"start": 605,
"end": 631,
"text": "(Ehrlich and Rayner, 1981;",
"ref_id": "BIBREF9"
},
{
"start": 632,
"end": 657,
"text": "Demberg and Keller, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 658,
"end": 679,
"text": "Smith and Levy, 2013;",
"ref_id": "BIBREF51"
},
{
"start": 680,
"end": 708,
"text": "Luke and Christianson, 2016)",
"ref_id": "BIBREF38"
},
{
"start": 737,
"end": 763,
"text": "(Kutas and Hillyard, 1980;",
"ref_id": "BIBREF34"
},
{
"start": 764,
"end": 783,
"text": "Frank et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Corpus data are (imperfect) samples from the linguistic environment of a native speaker, and psycholinguistic data indicate that accurate prediction is important to efficient language comprehension. Under the principle of rational analysis (Anderson, 1990) , it is thus to be expected that artificial language models trained on corpus data would correlate with human linguistic predictions and thus have good psychometric predictive accuracy. Nevertheless, past work (Smith and Levy, 2011) has suggested dissociations between corpus probabilities and human next-word estimates. Here, we further investigate this relationship using artificial language models and the most extensive corpus of sequential cloze completions that we are aware of: the Provo Corpus (Provo henceforth; Luke and Christianson, 2018).",
"cite_spans": [
{
"start": 240,
"end": 256,
"text": "(Anderson, 1990)",
"ref_id": "BIBREF0"
},
{
"start": 467,
"end": 489,
"text": "(Smith and Levy, 2011)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, we use Provo to test the psychometric performance of three state-of-the-art Transformerbased (Vaswani et al., 2017) LMs -XLNet , Transformer-XL , and GPT-2 (Radford et al., 2019) -alongside a smaller 2-layer LSTM (Hochreiter and Schmidhuber, 1997 ) trained on wikitext-103 (Merity et al., 2016) , and a 5-gram LM baseline (Stolcke, 2002) . We find that, while the Transformer models achieve the lowest perplexity on Provo and the best fit to the cloze data, the LSTM model provides the best account of reading times in terms of raw correlation. These findings show a dissociation between recapitulating corpus statistics and mimicking human language processing, operationalized here with reading times. That is, models that minimize perplexity on next-word prediction do not necessarily provide the best account of reading times. Second, based on these findings, we propose Cloze Distillation: a novel method for distilling linguistic information implicit in human cloze completions into pre-trained LMs. We apply this method to the LSTM model and show substantial improvement in reading time prediction and word frequency estimation, in addition to generalization to held-out human cloze data.",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF56"
},
{
"start": 163,
"end": 185,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF43"
},
{
"start": 220,
"end": 253,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF23"
},
{
"start": 280,
"end": 301,
"text": "(Merity et al., 2016)",
"ref_id": null
},
{
"start": 329,
"end": 344,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The objective for most modern LMs is to compute a probability distribution over the model's vocabulary V for the likely next-word x \u2208 V at position i given the context x <i consisting of the sequence of preceding words in the document. Similarly, as humans process language, they make constant and implicit linguistic predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Cloze Predictions",
"sec_num": "2"
},
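A minimal sketch of this objective in use, assuming the HuggingFace transformers package referenced later in Section 3.1 and the small "gpt2" checkpoint (an illustrative choice, not one of the paper's exact models): the model maps a prefix to a probability distribution over its vocabulary for the next word.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# A prefix from the Provo example discussed in Section 2.1.
prefix = "With schools still closed, cars still buried and streets still"
input_ids = tokenizer(prefix, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits            # shape: (1, seq_len, vocab_size)
p_next = torch.softmax(logits[0, -1], dim=-1)   # P_model(x | x_<i) over the vocabulary

# Inspect the model's top next-word candidates for this prefix.
top = torch.topk(p_next, k=5)
print([(tokenizer.decode(int(i)), round(float(p), 4)) for i, p in zip(top.indices, top.values)])
```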
{
"text": "One commonly used measure of these predictions in humans is the Cloze task. In its original form (Taylor, 1953) , the task involved masking a word or words in a source text passage and asking participants to provide words for the masked elements that would make the passage \"whole again\", a task structure adopted by contemporary masked language models (Devlin et al., 2019) . In experimental psycholinguistics, however, the most common version of the Cloze task has involved presenting the beginning, or prefix, of a passage and having participants either complete it or provide the word that they think comes next (Figure 1 ), a task more closely matching that of autoregressive language models (Radford et al., 2019) . In this paper, we focus on this latter type of Cloze task, which elicits samples from comprehenders' subjective next-word probability distributions (DeLong et al., 2005; Staub et al., 2015) . For any given prefix, we can estimate the cloze distribution of a typical native speaker from pooled cloze responses across a large number of participants (Luke and Christianson, 2018), similar to how the fundamental output of an autoregressive language model is a vector of next-word probabilities.",
"cite_spans": [
{
"start": 97,
"end": 111,
"text": "(Taylor, 1953)",
"ref_id": "BIBREF55"
},
{
"start": 353,
"end": 374,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 697,
"end": 719,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF43"
},
{
"start": 870,
"end": 891,
"text": "(DeLong et al., 2005;",
"ref_id": "BIBREF6"
},
{
"start": 892,
"end": 911,
"text": "Staub et al., 2015)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 616,
"end": 625,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Cloze Predictions",
"sec_num": "2"
},
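A toy sketch of how pooled responses yield an empirical cloze distribution for one prefix (the response counts below are invented for illustration, not Provo data):

```python
from collections import Counter

# Hypothetical responses from 40 participants to a single Provo prefix.
responses = (["closed"] * 13 + ["covered"] * 10 + ["blocked"] * 3 +
             ["icy"] * 6 + ["full"] * 4 + ["wet"] * 4)

counts = Counter(w.lower() for w in responses)
p_cloze = {w: c / len(responses) for w, c in counts.items()}  # empirical P_cloze(x | x_<i)
print(p_cloze["blocked"])  # e.g., 0.075, in the spirit of the example in Section 2.1
```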
{
"text": "We use the Provo Corpus (Luke and Christianson, 2018) as our source of paired cloze completion and reading time data. The Provo Corpus derives from 55 paragraphs of text taken from sources including online news articles, popular science, and fiction. For each paragraph p, next-word cloze completions were elicited for each prefix x <i for i = 2, . . . |p| (2,689 sentence prefixes total). Prefixes were presented to participants (N = 470) as a continuous multi-line text (Figure 1 ). This resulted in an average of 40 cloze responses with 15 unique continuations per prefix.",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 481,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Provo Corpus",
"sec_num": "2.1"
},
{
"text": "Additionally, Luke and Christianson (2018) collected eye movement data from eighty-four native speakers of American English as they read these 55 text passages, using a high-resolution SR Research EyeLink 1000 eye tracker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Provo Corpus",
"sec_num": "2.1"
},
{
"text": "The Provo cloze data, eye movement data, and the relationship between them are analyzed in detail in (Luke and Christianson, 2016) . Luke and Christianson (2016) point out that while context is rarely constraining enough to facilitate exact next-word prediction, modal cloze responses often constitute partial matches to the target words. For example, given the prefix With schools still closed, cars still buried and streets still ..., the true continuation, blocked, has a cloze probability of only 0.07. But the overwhelming majority of cloze responses are partial fits to the correct word: 79% of the responses are verbs, and 72% are inflectional matches (ended with -ed), with the two most frequent responses being closed and covered (example from Luke and Christianson, 2018). In addition, they showed that cloze probabilities are highly predictive of reading times, adding to prior work showing a word's reading time is a function of its predictability in context (e.g., Smith and Levy, 2013 Please type the word you think will come next.",
"cite_spans": [
{
"start": 91,
"end": 130,
"text": "detail in (Luke and Christianson, 2016)",
"ref_id": null
},
{
"start": 978,
"end": 998,
"text": "Smith and Levy, 2013",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Provo Corpus",
"sec_num": "2.1"
},
{
"text": "C lo z e P r e d ic t io n Figure 1 : Illustration of the Cloze task and the Cloze Distillation objective. Given one of Provo's prefixes -in this example, one that ends in . . . science's best current models, where the true next word (ground truth) is predict -human subjects were prompted, as shown in the Cloze task box, to predict the word they thought was likely to follow. The Cloze Distillation loss is constructed by combining (1) the KL divergence D i between the human cloze distribution and the LM's next-word distribution, and (2) the LM's predicted surprisal S i of the true next word given the prefix.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Provo Corpus",
"sec_num": "2.1"
},
{
"text": "The findings of Luke and Christianson (2016) highlight cloze as a useful test-bed for LMs. Specifically, a LM that employs predictions similar to those that underlie human language processing is expected to be a good model of human cloze responses. Therefore, we evaluate here a suite of LMs on their ability to match human cloze distributions. Additionally, we use the LMs' ability to predict reading times as a second measure of fit to human expectations, extending past work using LMs to predict reading times (Frank and Bod, 2011; .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing Language Models on Provo",
"sec_num": "3"
},
{
"text": "We consider in our analysis the following LMs: We use the LMzoo python package to access the 5-gram model, and the HuggingFace transformers python package (Wolf et al., 2019) for accessing Transformer models (gpt2-large, transfo-xl-wt103, and xlnet-large-cased respectively). These Transformer models use subword tokens (Sennrich et al., 2016) ; we defined word probabilities for these models as the joint probability of the subword tokens comprising the word given the context. Table 1 : Evaluation of LMs on Provo reveals a dissociation between performance on next-word prediction and psychometric measures that reflect human language processing. F intr and F base show the F-test statistics (Section 3.2.2) against various baseline predictors. \u03c1 gaze and \u03c1 freq show correlation with gaze and frequency respectively (Pearson's \u03c1). D i is average KL-divergence between the empirical cloze distribution and the LM's distributions; \u03c4 i is rank correlation between down-sampled model surprisals and surprisal values based on the empirical cloze probabilities; S i is average surprisal over the text in Provo; all standard deviations are computed by paragraph.",
"cite_spans": [
{
"start": 155,
"end": 174,
"text": "(Wolf et al., 2019)",
"ref_id": null
},
{
"start": 320,
"end": 343,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 479,
"end": 486,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
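A sketch of the subword-to-word probability computation described above, assuming a GPT-2-style BPE tokenizer accessed through HuggingFace transformers (the helper name and checkpoint are illustrative, not the paper's code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def word_logprob(context: str, word: str) -> float:
    """log P(word | context) = sum of the log-probs of the word's subword tokens."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    # Leading space so the word is tokenized as it appears mid-sentence.
    word_ids = tokenizer(" " + word, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, word_ids], dim=1)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    total = 0.0
    for k in range(word_ids.shape[1]):
        pos = ctx_ids.shape[1] + k           # position of the k-th subword token
        total += logprobs[0, pos - 1, ids[0, pos]].item()  # logits at pos-1 predict pos
    return total

print(word_logprob("With schools still closed, cars still buried and streets still", "blocked"))
```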
{
"text": "We use several metrics to evaluate the fit of our models to human reading times and cloze responses. We discuss and motivate them in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.2"
},
{
"text": "We use two measures to evaluate the performance of each model on human cloze data. First, we measure the deviation between the empirically estimated cloze distribution, P cloze (x|x <i ), where x is a potential next-word at position i in a document 1 and the model's next-word distribution, P model (x|x <i ), using the Kullback-Leibler (KL) divergence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze Responses",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D_i \\equiv D\\left[ P_{cloze}(x|x_{<i}) \\,\\|\\, P_{model}(x|x_{<i}) \\right] = \\sum_{x \\in V} P_{cloze}(x|x_{<i}) \\log \\frac{P_{cloze}(x|x_{<i})}{P_{model}(x|x_{<i})} ,",
"eq_num": "(1)"
}
],
"section": "Cloze Responses",
"sec_num": "3.2.1"
},
{
"text": "While the KL divergence is a natural measure for comparing distributions, it is potentially limited for our purposes due to the sparsity of the cloze data. To address this, we also consider Kendall's Tau correlation coefficient, which may be more robust to estimation errors resulting from small sample effects. Specifically, we consider Kendall's Tau correlation between LM surprisals and surprisals estimated form human cloze data, denoted here by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze Responses",
"sec_num": "3.2.1"
},
{
"text": "\u03c4 i \u2261 \u03c4 [P cloze (x|x <i ), P model (x|x <i )].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze Responses",
"sec_num": "3.2.1"
},
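For the rank-based comparison, something like the following suffices (scipy is an assumed dependency, and the surprisal arrays are toy values):

```python
import numpy as np
from scipy.stats import kendalltau

model_surprisal = np.array([4.1, 9.7, 2.3, 6.0, 5.5])   # -log2 P_model(x_i | x_<i)
cloze_surprisal = np.array([3.2, 8.5, 1.9, 7.1, 4.8])   # -log2 P_cloze(x_i | x_<i)
tau_i, p_value = kendalltau(model_surprisal, cloze_surprisal)
print(tau_i)
```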
{
"text": "To further evaluate the models' ability to mimic cloze responses and to control for the sparsity of the human cloze data, we simulated a cloze task experiment with our LMs. For each LM, we generated 40 cloze responses 2 per prefix x <i in Provo by sampling from P model (x|x <i ). We repeated this experiment 50 times for each model. The results were similar in both the down-sampling and withoutdown-sampling conditions, and we report only the down-sampling condition in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 479,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cloze Responses",
"sec_num": "3.2.1"
},
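A sketch of the down-sampling simulation, under the assumption that the model's next-word distribution is available as a probability vector (function and variable names are illustrative):

```python
import torch

def simulate_cloze_responses(p_next: torch.Tensor, n_responses: int = 40) -> torch.Tensor:
    """Draw n_responses token ids (with replacement) from P_model(x | x_<i),
    mimicking a 40-participant cloze sample for one prefix."""
    return torch.multinomial(p_next, num_samples=n_responses, replacement=True)

# p_next could come from the softmax over the model's final-position logits, as in
# the earlier sketch; repeating the draw 50 times mirrors the 50 runs described above.
```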
{
"text": "We use gaze duration during first-pass reading as our measure of reading times, which is the amount of time a reader's eyes spend on a word the first time they fixate it (Rayner, 1998 ; if a reader fixates a word to the right before fixating the word in question, the word has been \"skipped\" and there is no valid gaze duration). It is well established that gaze duration captures a wide variety of cognitive processes during real-time language-comprehension, including the relationship between a word and the context in which it appears (Staub, 2011) .",
"cite_spans": [
{
"start": 170,
"end": 183,
"text": "(Rayner, 1998",
"ref_id": "BIBREF44"
},
{
"start": 538,
"end": 551,
"text": "(Staub, 2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Times",
"sec_num": "3.2.2"
},
{
"text": "We evaluate the ability of a LM to account for human reading times based on their predicted surprisal values,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Times",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S i \u2261 \u2212 log 2 P model (x i |x <i ) ,",
"eq_num": "(2)"
}
],
"section": "Reading Times",
"sec_num": "3.2.2"
},
{
"text": "as it has been previously shown to capture several characteristics of human language comprehension and pattern with reading times (Smith and Levy, 2013; . Similarly, we define cloze surprisals by taking the negative log of the empirical cloze probabilities 3 , i.e., \u2212 log 2 P cloze (x i |x <i ). We then measure Pearson's correlation \u03c1 between reading times and surprisal values. In addition, we use ANOVA tests to measure the models' predictive capacities beyond standard baseline predictors of reading time (Howes and Solomon, 1951; Kliegl et al., 2006; Leyland et al., 2013 ) -log word frequency and word length. That is, for each model (either an LM or the cloze distribution), we enter its surprisal values into a linear mixed-effects model (LME) along with the baseline predictors, and measure their contribution by computing the F-test statistic between the full LME and an LME where model surprisals are ablated out. In the case of F base the baseline predictors were frequency, length, and their interaction.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Smith and Levy, 2013;",
"ref_id": "BIBREF51"
},
{
"start": 510,
"end": 535,
"text": "(Howes and Solomon, 1951;",
"ref_id": "BIBREF24"
},
{
"start": 536,
"end": 556,
"text": "Kliegl et al., 2006;",
"ref_id": "BIBREF30"
},
{
"start": 557,
"end": 577,
"text": "Leyland et al., 2013",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Times",
"sec_num": "3.2.2"
},
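The paper fits linear mixed-effects models; a simplified, fixed-effects-only sketch of the same ablation logic with statsmodels (column names and the CSV file are hypothetical) looks like this:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-word data: gaze duration, log frequency, word length, LM surprisal.
df = pd.read_csv("provo_gaze_with_surprisal.csv")

base = smf.ols("gaze_duration ~ log_freq * length", data=df).fit()        # baseline predictors only
full = smf.ols("gaze_duration ~ log_freq * length + surprisal", data=df).fit()

# F-test for the contribution of model surprisal beyond the baseline predictors
# (the analogue of F_base above; random effects are omitted in this sketch).
print(anova_lm(base, full))
```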
{
"text": "In the case of F intr the baseline predictors were simply random by-word intercepts. We use both word frequencies estimated from the Corpus of Contemporary American English (COCA; Davies, 2010) and from wikitext-103 (Merity et al., 2016) in our analysis. As the results of our analyses were qualitatively the same in both conditions we report only results from COCA in the analyses to follow.",
"cite_spans": [
{
"start": 180,
"end": 193,
"text": "Davies, 2010)",
"ref_id": "BIBREF5"
},
{
"start": 216,
"end": 237,
"text": "(Merity et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Times",
"sec_num": "3.2.2"
},
{
"text": "The main results of evaluating the LMs on Provo are summarized in Table 1 . First, averaging the KL divergence and suprisals values over word positions i in Provo (that is, D i and S i respectively), shows that the ability of LMs to predict human cloze responses tracks with their language modeling performance. This pattern is also reflected in Kendall's \u03c4 correlation between model surprisals and surprisals constructed from the human cloze distribution. At the same time, Table 1 reveals a dissociation between next-word prediction, reflected by S i , and human language processing, as reflected in reading times. Specifically, the LSTM model, which does not perform as well as the Transformer-based LMs in next-word prediction on Provo, as reflected in its higher S i , exhibits superior ability in predicting reading times, as measured in \u03c1 gaze and F intr . This result is similar to that of Merkx and Frank (2020) , who found that Gated Recurrent Unit networks outperformed Transformer models with lower perplexity in predicting gaze duration. We note that when predicting reading times not only from the model's surprisal values, but also using the baseline predictors (word frequency and do not include cloze probabilities of zero (which would yield infinite surprisal). length), the LSTM model no longer outperforms the Transformer-based models (Table 1, F base ). Nonetheless, it is striking that the LSTM model, which is much smaller than the Transformer-based models and was trained on much less data, achieves the best performance in predicting reading times without the baseline predictors.",
"cite_spans": [
{
"start": 898,
"end": 920,
"text": "Merkx and Frank (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 1",
"ref_id": null
},
{
"start": 475,
"end": 482,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "Past work shows that human predictions systematically diverge from corpus probabilities (Smith and Levy, 2011) . Our analysis extends these findings by testing current state-of-the-art LMs trained on much larger datasets, and showing that, while better estimates of corpus probabilities may yield better models of human next-word predictions, there does not seem to be a strict positive correlation between the ability to approximate corpus probabilities and the ability to predict human reading times, as evidenced by models with higher S i being on-par and even better at predicting reading times compared to models with lower S i .",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Smith and Levy, 2011)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intermediate Conclusions",
"sec_num": "3.4"
},
{
"text": "Recent studies (Ettinger, 2020; Hao et al., 2020; Jacobs and McCarthy, 2020) have found similar trends when comparing LMs to cloze data. also found only a loose relationship between perplexity (a monotonic function of S i ) and syntactic generalization, adding to a growing body of evidence suggesting that while optimizing for corpus probabilities can create somewhat psycholinguistically-enabled language models (Linzen et al., 2016; Futrell et al., 2019; , there may be a dissociation between corpus probabilities and human expectations.",
"cite_spans": [
{
"start": 15,
"end": 31,
"text": "(Ettinger, 2020;",
"ref_id": "BIBREF10"
},
{
"start": 32,
"end": 49,
"text": "Hao et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 50,
"end": 76,
"text": "Jacobs and McCarthy, 2020)",
"ref_id": "BIBREF28"
},
{
"start": 414,
"end": 435,
"text": "(Linzen et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 436,
"end": 457,
"text": "Futrell et al., 2019;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intermediate Conclusions",
"sec_num": "3.4"
},
{
"text": "Here, we show how to leverage these findings to improve the ability of LMs to match human expectations, providing more appealing neural language models for human language processing. To this end, we propose Cloze Distillation: a method for using human next-word predictions as learning targets together with corpus statistics within a knowledge distillation framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze Distillation",
"sec_num": "4"
},
{
"text": "Knowledge distillation (Buciluundefined et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015 ) is a technique of imbuing knowledge from a teacher model into a student model by training the student to make the same predictions as the teacher. Typ-ically deployed as a form of model compression, knowledge distillation is useful for those looking to deploy insights from one or more complicated models into a single smaller model. Recently, knowledge distillation has also proven useful to cognitive scientists in creating low-dimensional neural network cognitive models (Schaeffer et al., 2020) . When humans are used as the 'teacher' this can be seen as a specific case of a more general cognitive modeling strategy, task-based modeling.",
"cite_spans": [
{
"start": 23,
"end": 53,
"text": "(Buciluundefined et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 54,
"end": 75,
"text": "Ba and Caruana, 2014;",
"ref_id": "BIBREF1"
},
{
"start": 76,
"end": 95,
"text": "Hinton et al., 2015",
"ref_id": "BIBREF22"
},
{
"start": 572,
"end": 596,
"text": "(Schaeffer et al., 2020)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Distillation",
"sec_num": "4.1"
},
{
"text": "Knowledge distillation has proven its usefulness in NLP where researchers have distilled knowledge from very large and/or syntactically aware language models into naive models showing it is possible to transfer even subtle linguistic preferences from teacher to student (Kim and Rush, 2016; Kuncoro et al., 2019; Sanh et al., 2020; Kuncoro et al., 2020) .",
"cite_spans": [
{
"start": 270,
"end": 290,
"text": "(Kim and Rush, 2016;",
"ref_id": "BIBREF29"
},
{
"start": 291,
"end": 312,
"text": "Kuncoro et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 313,
"end": 331,
"text": "Sanh et al., 2020;",
"ref_id": "BIBREF46"
},
{
"start": 332,
"end": 353,
"text": "Kuncoro et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
{
"text": "We take inspiration from this work and leverage the general framework both as a method for distilling knowledge from a 'teacher' with desirable linguistic biases (humans in our case) and as a tool for cognitive modeling by using empirical cloze distributions P cloze as target distributions in a knowledge distillation framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
{
"text": "We follow this approach to arrive at the following loss function for Cloze Distillation (CD):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L i = \u03b1D i \u2212 (1 \u2212 \u03b1)S i .",
"eq_num": "(3)"
}
],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
{
"text": "That is, for each context x <i we compute the CD loss by linearly interpolating D i , the KL divergence between the distributions of the human teacher and the student model as defined in equation 1, with an autoregressive language modeling objective that places unit probability mass on the true next-word, formally defined by S i in equation 2. Thus, CD fine-tunes LMs to predict the next word in the document while simultaneously producing a distribution over next-words that mirrors the empirical human cloze distribution for that context. This process is illustrated in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 574,
"end": 582,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
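A sketch of the CD loss at a single position, written in PyTorch (an assumed framework; argument names are illustrative, not the paper's code). It combines the KL term D_i over the human cloze distribution with the surprisal S_i of the ground-truth next word, weighted by alpha:

```python
import torch
import torch.nn.functional as F

def cloze_distillation_loss(logits: torch.Tensor,       # (vocab_size,) LM logits at position i
                            cloze_counts: torch.Tensor, # (vocab_size,) human response counts
                            target_id: int,             # index of the true next word
                            alpha: float) -> torch.Tensor:
    log_p_model = F.log_softmax(logits, dim=-1)
    p_cloze = cloze_counts / cloze_counts.sum()          # empirical cloze distribution

    # D_i: KL(P_cloze || P_model), restricted to words with nonzero cloze probability
    # (computed in nats here; the paper reports surprisal in bits).
    mask = p_cloze > 0
    d_i = torch.sum(p_cloze[mask] * (torch.log(p_cloze[mask]) - log_p_model[mask]))

    # S_i: surprisal of the ground-truth next word under the model.
    s_i = -log_p_model[target_id]

    return alpha * d_i + (1 - alpha) * s_i
```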
{
"text": "To evaluate the utility of the human cloze data, we vary the values of \u03b1 from \u03b1 = 0, which corresponds to pure next-word prediction driven finetuning, to \u03b1 = 1, which corresponds to pure clozeprediction based fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cloze Distillation Objective",
"sec_num": "4.2"
},
{
"text": "To begin to evaluate the CD paradigm, we apply it to the LSTM from Section 3 by fine-tuning this model using the CD objective over Provo. To test generalization and utilize the full corpus, we use a k-fold cross-validation scheme with k = 55, the number of paragraphs in Provo where humans are provided the full preceding paragraph as context. That is, each fold consists of data from one paragraph in the Provo dataset. We use 100 epochs for training. We provide our LM with the same context as humans, up to the beginning of the current paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze-Distilled LSTM",
"sec_num": "4.3"
},
{
"text": "Additionally, we vary \u03b1 to test the utility of our cloze data and cross-validated separately for each value of \u03b1 in the range [0, 1], sampled at intervals of 0.05. This resulted in 1,155 unique models for testing. We wish to emphasize that even utilizing the entire Provo corpus via cross-validation, we are left with only 2685 training samples, which is minuscule with respect to the model's pre-training data (roughly 100 million samples). We refer to the resultant model as cloze-distilled LSTM (CD-LSTM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze-Distilled LSTM",
"sec_num": "4.3"
},
{
"text": "After fine-tuning on the CD objective, we note several interesting adaptions in model behavior. These mainly include significant improvement over the standard LSTM baseline in predicting human reading times and cloze distributions ( Figure 2 ). We also discuss improvements in next-word prediction performance over Provo (Figure 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 241,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 321,
"end": 330,
"text": "(Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Psychometric predictive capacity is starkly improved with Cloze Distillation, and the strength of the effect scales with \u03b1. This can be seen in Figure 2 , which shows the statistical comparison of the CD-LSTM for varying levels of \u03b1. We add another model comparison designed to isolate the ability of CD-LSTM to predict reading times above the standard LSTM (Figure 2a ). Specifically, we enter CD-LSTM's surprisals into an LME along with baseline predictors and surprisals from the standard LSTM and compute the F-test statistic against a LME with CD-LSTM surprisal ablated out.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 358,
"end": 368,
"text": "(Figure 2a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Reading times",
"sec_num": "4.4.1"
},
{
"text": "CD-LSTM exhibits a significant improvement with \u03b1 in its ability to predict reading times above the non-fine-tuned model (Figure 2a ), as well as improvements over an intercept-only model (Figure 2b) and baseline-only (Figure 2c ). Correlation with reading time and CD-LSTM's surprisal also steadily increases with \u03b1 (Figure 2d ). These findings suggest that, as we postulate, Cloze Distillation is a useful paradigm for extracting the information about human linguistic expectations that is implicit in human cloze predictions and incorporating it into LMs.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 131,
"text": "(Figure 2a",
"ref_id": "FIGREF2"
},
{
"start": 188,
"end": 199,
"text": "(Figure 2b)",
"ref_id": "FIGREF2"
},
{
"start": 218,
"end": 228,
"text": "(Figure 2c",
"ref_id": "FIGREF2"
},
{
"start": 317,
"end": 327,
"text": "(Figure 2d",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Reading times",
"sec_num": "4.4.1"
},
{
"text": "We report improvements in predicting held out cloze data, where D i is decreased from 3.8 (at \u03b1 = 0) to 3.6 (at \u03b1 = 0.65) (Figure 3) . \u03c4 correlation also exceeds that of the baseline model for several values of \u03b1 (though there does not seem to be a consistent trend across \u03b1-s).",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 132,
"text": "(Figure 3)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Cloze",
"sec_num": "4.4.2"
},
{
"text": "This result is intriguing as it implies that the requisite information for computing cloze distributions is learned over fine-tuning. Furthermore, we see a peak at \u03b1 = 0.65 and not at \u03b1 = 1, which suggests that in training LMs to predict cloze data, some signal from next-word prediction remains vital.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze",
"sec_num": "4.4.2"
},
{
"text": "In addition to improved performance on our human language processing benchmarks, we see a robust increase in language modeling performance for most values of \u03b1, as evidenced by average surprisal over Provo (Figure 3) . We note, the standard deviation in S i for our LSTM over Provo was 1.86 bits ( Table 1 ). The improvements we see are less than this deviation, and are thusly below the level of significance, though we do see a consistent trend in \u03b1. This effect is most substantial for intermediate values of \u03b1, suggesting that a combination of human knowledge and next-word prediction improves relative to either one of these factors on its own. This indicates that both parts of the loss function (ground truth next-words, human cloze) provide useful information for predicting text that is not entirely overlapping. This is interesting given the low S i of human cloze data. The fact that humans can contend with large language models trained explicitly on nextword prediction even on subsets of text, together with our Cloze Distillation results suggests there is linguistic information in human cloze that can be harnessed by LMs to subserve general language modeling and is disjoint from the information accessible in corpus probability (Smith and Levy, 2011) .",
"cite_spans": [
{
"start": 1246,
"end": 1268,
"text": "(Smith and Levy, 2011)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 206,
"end": 216,
"text": "(Figure 3)",
"ref_id": "FIGREF3"
},
{
"start": 298,
"end": 305,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language modeling",
"sec_num": "4.4.3"
},
{
"text": "We also note that as \u03b1 increases, the CD-LSTM next-word predictions exhibit increased correlation with frequency (Figure 2e ), suggesting that cloze distilled LMs may learn to better predict frequent words. This is interesting as a proof of concept that Cloze Distillation distills information implicit in cloze into language models as previous work (Smith and Levy, 2011) has shown human cloze is skewed toward more frequent words, relative to corpus probability.",
"cite_spans": [
{
"start": 350,
"end": 372,
"text": "(Smith and Levy, 2011)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 113,
"end": 123,
"text": "(Figure 2e",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Frequency",
"sec_num": "4.4.4"
},
{
"text": "Our analyses provide further evidence of a misalignment between language model estimates and human expectations. The method we provide: Cloze Distillation, demonstrates that shifting training incentives away from corpus probability toward psycholinguistic task-based modeling can result in better cognitive models and better language models. Still, given several of our models predict reading times beyond the cloze data collected in Provo (Table 1) there are several possible explanations for the effect Cloze Distillation has on language model performance. One is that the Cloze task produces data that are a more faithful reflection of the expectations deployed in human reading and are thus able to guide the models toward a fundamentally more human-like set of expectations -despite being under-sampled. If this is true and human subjec-tive next-word estimates also provide signal about next-word probabilities across corpora (reflecting the implicit knowledge speakers have learned about the statistics of their language), this would explain why Cloze Distillation improves next-word prediction accuracy on a new corpus (Provo).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Another possibility is that the models we survey are fundamentally better than the cloze data at capturing the human expectations deployed in reading. Though this would not explain the boost in performance we see in reading time prediction with Cloze Distillation, because several of our models predict reading times better than the cloze data itself, this can not yet be ruled out. We leave the further exploration of this to future work as largerscale collection of human cloze data allows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "That said, the fact that we were able to induce appreciable adaptions in model behavior with such little data highlights the richly orienting information available in even noisy human predictions. Though it is unclear how language users learn to make such sophisticated predictions (we provided this information to our model with direct supervision), our model's ability to learn from such small scale data highlights the potential utility of such predictions in a language acquisition setting -it seems that human predictions are strong enough to significantly bolster the signal in raw linguistic input abetting extensive adaption from relatively little data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As of now, the current dataset's scale restricts Cloze Distillation to use as a fine-tuning method. Furthermore, we use simple LSTMs to perform a detailed analysis of Cloze Distillation with dense sampling in \u03b1 and thorough cross-validation. It is possible that deploying Cloze Distillation during pre-training in large models (e.g., Transformers) could result in models better able to learn the word features humans demonstrate knowledge of in their cloze responses and we leave the exploration of this to future work as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Methods such as Cloze Distillation provide an avenue forward for psycholinguists interested in taking LMs seriously as candidate models of human language processing and to natural language processing researchers interested in reverse engineering and deploying insights from human sentence processing. Cloze Distillation highlights these goals as potentially mutually-reinforcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "TNE was supported by the GEM consortium and the MIT Dean of Sciences Fellowship. NZ was supported by an MIT BCS Fellowship in Computation. RPL was supported by NSF grant IIS1815529, a Google Faculty Research Award, and a Newton Brain Science Award. We thank Robert Chen for helping collect model surprisals, as well as Peng Qian and Jon Gauthier for helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6"
},
{
"text": "As participants in Luke and Christianson (2018) were given only within-paragraph context when prompted for each cloze response, each paragraph constitutes a unique document in our analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We generated 40 responses because most prefixes in Provo had at least 40 responses provided by participants.3 We use the cloze probability estimates from Luke and Christianson (2018)'s 'Orthographic Match Model' -a logit mixed-effects model including only random by-word intercepts. These estimates are nearly perfectly correlated with the relative frequency estimate of cloze (\u03c1 = .999), but crucially",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Adaptive Character of Human Thought",
"authors": [
{
"first": "John",
"middle": [
"R"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R. Anderson. 1990. The Adaptive Character of Human Thought. Hillsdale, NJ: Lawrence Erlbaum.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Do Deep Nets Really Need to be Deep?",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2654--2662",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba and Rich Caruana. 2014. Do Deep Nets Re- ally Need to be Deep? In Advances in Neural Infor- mation Processing Systems 27, pages 2654-2662.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Model Compression",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Buciluundefined",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {
"DOI": [
"10.1145/1150402.1150464"
]
},
"num": null,
"urls": [],
"raw_text": "Cristian Buciluundefined, Rich Caruana, and Alexan- dru Niculescu-Mizil. 2006. Model Compression. In Proceedings of the 12th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, KDD, page 535-541, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science",
"authors": [
{
"first": "Andy",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Brain and Behavioral Sciences",
"volume": "36",
"issue": "3",
"pages": "181--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andy Clark. 2013. Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Sci- ence. Brain and Behavioral Sciences, 36(3):181- 204.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Transformer-XL: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2978--2988",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1285"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Corpus of Contemporary American English as the First Reliable Monitor Corpus of English",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2010,
"venue": "Literary and Linguistic Computing",
"volume": "25",
"issue": "4",
"pages": "447--464",
"other_ids": {
"DOI": [
"10.1093/llc/fqq018"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Davies. 2010. The Corpus of Contemporary American English as the First Reliable Monitor Cor- pus of English. Literary and Linguistic Computing, 25(4):447-464.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic Word Pre-Activation During Language Comprehension Inferred from Electrical Brain Activity",
"authors": [
{
"first": "A",
"middle": [],
"last": "Katherine",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Delong",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Urbach",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kutas",
"suffix": ""
}
],
"year": 2005,
"venue": "Nature Neuroscience",
"volume": "8",
"issue": "8",
"pages": "1117--1121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katherine A DeLong, Thomas P Urbach, and Marta Kutas. 2005. Probabilistic Word Pre-Activation During Language Comprehension Inferred from Electrical Brain Activity. Nature Neuroscience, 8(8):1117-1121.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Data from Eye-tracking Corpora as Evidence for Theories of Syntactic Processing Complexity",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "109",
"issue": "2",
"pages": "193--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Demberg and Frank Keller. 2008. Data from Eye-tracking Corpora as Evidence for Theories of Syntactic Processing Complexity. Cognition, 109(2):193-210.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Contextual Effects on Word Perception and Eye Movements During Reading",
"authors": [
{
"first": "Susan",
"middle": [
"F"
],
"last": "Ehrlich",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1981,
"venue": "Journal of Verbal Learning and Verbal Behavior",
"volume": "20",
"issue": "6",
"pages": "641--655",
"other_ids": {
"DOI": [
"10.1016/S0022-5371(81)90220-6"
]
},
"num": null,
"urls": [],
"raw_text": "Susan F. Ehrlich and Keith Rayner. 1981. Contextual Effects on Word Perception and Eye Movements During Reading. Journal of Verbal Learning and Verbal Behavior, 20(6):641 -655.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What BERT is not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "34--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger. 2020. What BERT is not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Associa- tion for Computational Linguistics, 8:34-48.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Rose by Any Other Name: Long-Term Memory Structure and Sentence Processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kara",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Federmeier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kutas",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Memory and Language",
"volume": "41",
"issue": "4",
"pages": "469--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kara D Federmeier and Marta Kutas. 1999. A Rose by Any Other Name: Long-Term Memory Structure and Sentence Processing. Journal of Memory and Language, 41(4):469-495.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Impact of Semantic Memory Organization and Sentence Context Information on Spoken Language Processing by Younger and Older Adults: an ERP Study",
"authors": [
{
"first": "Devon",
"middle": [
"B"
],
"last": "Kara D Federmeier",
"suffix": ""
},
{
"first": "Esmeralda",
"middle": [
"De"
],
"last": "Mclennan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Ochoa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kutas",
"suffix": ""
}
],
"year": 2002,
"venue": "Psychophysiology",
"volume": "39",
"issue": "2",
"pages": "133--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kara D Federmeier, Devon B McLennan, Esmeralda De Ochoa, and Marta Kutas. 2002. The Impact of Semantic Memory Organization and Sentence Con- text Information on Spoken Language Processing by Younger and Older Adults: an ERP Study. Psy- chophysiology, 39(2):133-146.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sequential vs. Hierarchical Syntactic Models of Human Incremental Sentence Processing",
"authors": [
{
"first": "Victoria",
"middle": [],
"last": "Fossum",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "61--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victoria Fossum and Roger Levy. 2012. Sequential vs. Hierarchical Syntactic Models of Human Incre- mental Sentence Processing. In Proceedings of the 3rd Workshop on Cognitive Modeling and Com- putational Linguistics (CMCL 2012), pages 61-69, Montr\u00e9al, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Insensitivity of the Human Sentence-processing System to Hierarchical Structure",
"authors": [
{
"first": "L",
"middle": [],
"last": "Stefan",
"suffix": ""
},
{
"first": "Rens",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2011,
"venue": "Psychological Science",
"volume": "22",
"issue": "6",
"pages": "829--834",
"other_ids": {
"DOI": [
"10.1177/0956797611409589"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan L Frank and Rens Bod. 2011. Insensitivity of the Human Sentence-processing System to Hierar- chical Structure. Psychological Science, 22(6):829- 834.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The ERP Response to the Amount of Information Conveyed by Words in Sentences",
"authors": [
{
"first": "L",
"middle": [],
"last": "Stefan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Leun",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Otten",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Galli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2015,
"venue": "Brain and Language",
"volume": "140",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.1016/j.bandl.2014.10.006"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan L Frank, Leun J Otten, Giulia Galli, and Gabriella Vigliocco. 2015. The ERP Response to the Amount of Information Conveyed by Words in Sentences. Brain and Language, 140:1-11.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "32--42",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural Language Models as Psycholinguistic Sub- jects: Representations of Syntactic State. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 32-42, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SyntaxGym: An Online Platform for Targeted Evaluation of Language Models",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "70--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. SyntaxGym: An Online Plat- form for Targeted Evaluation of Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 70-76, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predictive Power of Word Surprisal for Reading Times is a Linear Function of Language Model Quality",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Goodkind",
"suffix": ""
},
{
"first": "Klinton",
"middle": [],
"last": "Bicknell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0102"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Goodkind and Klinton Bicknell. 2018. Predic- tive Power of Word Surprisal for Reading Times is a Linear Function of Language Model Quality. In Proceedings of the 8th Workshop on Cognitive Mod- eling and Computational Linguistics (CMCL 2018), pages 10-18, Salt Lake City, Utah. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Probabilistic Earley Parser as a Psycholinguistic Model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2001,
"venue": "Second Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hale. 2001. A Probabilistic Earley Parser as a Psycholinguistic Model. In Second Meeting of the North American Chapter of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Finding Syntax in Human Encephalography with Beam Search",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Brennan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2727--2736",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1254"
]
},
"num": null,
"urls": [],
"raw_text": "John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding Syntax in Human Encephalography with Beam Search. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2727-2736, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling",
"authors": [
{
"first": "Yiding",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mendelsohn",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Sterneck",
"suffix": ""
},
{
"first": "Randi",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.03954"
]
},
"num": null,
"urls": [],
"raw_text": "Yiding Hao, Simon Mendelsohn, Rachel Sterneck, Randi Martinez, and Robert Frank. 2020. Probabilis- tic Predictions of People Perusing: Evaluating Met- rics of Language Model Performance for Psycholin- guistic Modeling. arXiv preprint arXiv:2009.03954.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distilling the Knowledge in a Keural Network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "Deep Learning and Representation Learning Workshop at NuerIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the Knowledge in a Keural Net- work. In Deep Learning and Representation Learn- ing Workshop at NuerIPS.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Visual duration threshold as a function of word-probability",
"authors": [
{
"first": "D",
"middle": [
"H"
],
"last": "Howes",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Solomon",
"suffix": ""
}
],
"year": 1951,
"venue": "Journal of Experimental Psychology",
"volume": "41",
"issue": "6",
"pages": "401--410",
"other_ids": {
"DOI": [
"10.1037/h0056020"
]
},
"num": null,
"urls": [],
"raw_text": "D.H. Howes and R.L. Solomon. 1951. Visual duration threshold as a function of word-probability. Journal of Experimental Psychology, 41(6):401-410.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Systematic Assessment of Syntactic Generalization in Neural Language Models",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1725--1744",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A Systematic Assessment of Syntactic Generalization in Neural Language Mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Use of Lexical and Referential Cues in Children's Online Interpretation of Adjectives",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Ting Huang",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Snedeker",
"suffix": ""
}
],
"year": 2013,
"venue": "Developmental Psychology",
"volume": "49",
"issue": "6",
"pages": "1090--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Ting Huang and Jesse Snedeker. 2013. The Use of Lexical and Referential Cues in Children's Online Interpretation of Adjectives. Developmental Psy- chology, 49(6):1090-1102.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Consciousness and the Computational Mind",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "356",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 1987. Consciousness and the Compu- tational Mind, volume 356. The MIT Press, Cam- bridge, MA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Human Unlikeness of Neural Language Models in Next-word Prediction",
"authors": [
{
"first": "Cassandra",
"middle": [
"L"
],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "McCarthy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.winlp-1.29"
]
},
"num": null,
"urls": [],
"raw_text": "Cassandra L. Jacobs and Arya D. McCarthy. 2020. The Human Unlikeness of Neural Language Mod- els in Next-word Prediction. In Proceedings of the The Fourth Widening Natural Language Processing Workshop, page 115, Seattle, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Sequence-Level Knowledge Distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- Level Knowledge Distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Tracking the Mind During Reading: The Influence of Past, Present, and Future Words on Fixation Durations",
"authors": [
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
},
{
"first": "Antje",
"middle": [],
"last": "Nuthmann",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engbert",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Experimental Psychology",
"volume": "135",
"issue": "",
"pages": "12--35",
"other_ids": {
"DOI": [
"10.1037/0096-3445.135.1.12"
]
},
"num": null,
"urls": [],
"raw_text": "Reinhold Kliegl, Antje Nuthmann, and Ralf Engbert. 2006. Tracking the Mind During Reading: The In- fluence of Past, Present, and Future Words on Fix- ation Durations. Journal of Experimental Psychol- ogy, 135:12-35.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Early Language Acquisition: Cracking the Speech Code",
"authors": [
{
"first": "Patricia",
"middle": [
"K"
],
"last": "Kuhl",
"suffix": ""
}
],
"year": 2004,
"venue": "Nature Reviews Neuroscience",
"volume": "5",
"issue": "11",
"pages": "831--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patricia K Kuhl. 2004. Early Language Acquisition: Cracking the Speech Code. Nature Reviews Neuro- science, 5(11):831-843.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scalable Syntax-Aware Language Models Using Knowledge Distillation",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3472--3484",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1337"
]
},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable Syntax- Aware Language Models Using Knowledge Distil- lation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3472-3484, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Syntactic Structure Distillation Pretraining for Bidirectional Encoders",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.13482"
]
},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, and Phil Blunsom. 2020. Syntactic Structure Distillation Pre- training for Bidirectional Encoders. arXiv preprint arXiv:2005.13482.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Reading senseless sentences: Brain potentials reflect semantic incongruity",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Kutas",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"A"
],
"last": "Hillyard",
"suffix": ""
}
],
"year": 1980,
"venue": "Science",
"volume": "207",
"issue": "4427",
"pages": "203--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Kutas and Steven A. Hillyard. 1980. Reading senseless sentences: Brain potentials reflect seman- tic incongruity. Science, 207(4427):203-205.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Expectation-based syntactic comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2008. Expectation-based syntactic com- prehension. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The Influence of Word Shading and Word Length on Eye Movements During Reading",
"authors": [
{
"first": "Louise-Ann",
"middle": [],
"last": "Leyland",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Kirkby",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Juhasz",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pollatsek",
"suffix": ""
},
{
"first": "Simon",
"middle": [
"P"
],
"last": "Liversedge",
"suffix": ""
}
],
"year": 2013,
"venue": "Quarterly Journal of Experimental Psychology",
"volume": "66",
"issue": "3",
"pages": "471--486",
"other_ids": {
"DOI": [
"10.1080/17470218.2011.599401"
],
"PMID": [
"21988376"
]
},
"num": null,
"urls": [],
"raw_text": "Louise-Ann Leyland, Julie A. Kirkby, Barbara J. Juhasz, Alexander Pollatsek, and Simon P. Liv- ersedge. 2013. The Influence of Word Shading and Word Length on Eye Movements During Read- ing. Quarterly Journal of Experimental Psychology, 66(3):471-486. PMID: 21988376.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Assessing the ability of LSTMs to Learn Syntax-Sensitive Dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00115"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Limits on Lexical Prediction During Reading",
"authors": [
{
"first": "Steven",
"middle": [
"G"
],
"last": "Luke",
"suffix": ""
},
{
"first": "Kiel",
"middle": [],
"last": "Christianson",
"suffix": ""
}
],
"year": 2016,
"venue": "Cognitive Psychology",
"volume": "88",
"issue": "",
"pages": "22--60",
"other_ids": {
"DOI": [
"10.1016/j.cogpsych.2016.06.002"
]
},
"num": null,
"urls": [],
"raw_text": "Steven G. Luke and Kiel Christianson. 2016. Limits on Lexical Prediction During Reading. Cognitive Psychology, 88:22 -60.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The Provo Corpus: A Large Eye-Tracking Corpus with Predictability Norms",
"authors": [
{
"first": "Steven",
"middle": [
"G"
],
"last": "Luke",
"suffix": ""
},
{
"first": "Kiel",
"middle": [],
"last": "Christianson",
"suffix": ""
}
],
"year": 2018,
"venue": "Behavior Research Methods",
"volume": "50",
"issue": "2",
"pages": "826--833",
"other_ids": {
"DOI": [
"10.3758/s13428-017-0908-4"
]
},
"num": null,
"urls": [],
"raw_text": "Steven G. Luke and Kiel Christianson. 2018. The Provo Corpus: A Large Eye-Tracking Corpus with Predictability Norms. Behavior Research Methods, 50(2):826-833.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Comparing Transformers and RNNs on Predicting Human Sentence Processing Data",
"authors": [
{
"first": "Danny",
"middle": [],
"last": "Merkx",
"suffix": ""
},
{
"first": "Stefan",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.09471"
]
},
"num": null,
"urls": [],
"raw_text": "Danny Merkx and Stefan L Frank. 2020. Com- paring Transformers and RNNs on Predicting Hu- man Sentence Processing Data. arXiv preprint arXiv:2005.09471.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Automatic Differentiation in Py-Torch. Neural Information Processing Systems Autodiff Workshop",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic Differentiation in Py- Torch. Neural Information Processing Systems Au- todiff Workshop.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Language Models are Unsupervised Multitask Learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "Ope-nAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Ope- nAI Blog, 1(8):9.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Eye Movements in Reading and Information Processing: 20 Years of Research",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1998,
"venue": "Psychological Bulletin",
"volume": "124",
"issue": "3",
"pages": "372--422",
"other_ids": {
"DOI": [
"10.1037/0033-2909.124.3.372"
]
},
"num": null,
"urls": [],
"raw_text": "Keith Rayner. 1998. Eye Movements in Reading and Information Processing: 20 Years of Research. Psy- chological Bulletin, 124(3):372-422.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Do Domain-General Executive Resources Play a Role in Linguistic Prediction? Re-evaluation of the Evidence and a Path Forward",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Ryskin",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Fedorenko",
"suffix": ""
}
],
"year": 2020,
"venue": "Neuropsychologia",
"volume": "136",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Ryskin, Roger P Levy, and Evelina Fedorenko. 2020. Do Domain-General Executive Resources Play a Role in Linguistic Prediction? Re-evaluation of the Evidence and a Path Forward. Neuropsycholo- gia, 136:107258.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "DistilBert, a Distilled Version of BERT:Smaller, Faster, Cheaper and Lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBert, a Distilled Ver- sion of BERT:Smaller, Faster, Cheaper and Lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Reverse-engineering Recurrent Neural Network Solutions to a Hierarchical Inference Task for Mice",
"authors": [
{
"first": "Rylan",
"middle": [],
"last": "Schaeffer",
"suffix": ""
},
{
"first": "Mikail",
"middle": [],
"last": "Khona",
"suffix": ""
},
{
"first": "Leenoy",
"middle": [],
"last": "Meshulam",
"suffix": ""
},
{
"first": "Ila",
"middle": [
"Rani"
],
"last": "Fiete",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1101/2020.06.09.142745"
]
},
"num": null,
"urls": [],
"raw_text": "Rylan Schaeffer, Mikail Khona, Leenoy Meshulam, and Ila Rani Fiete. 2020. Reverse-engineering Re- current Neural Network Solutions to a Hierarchical Inference Task for Mice. bioRxiv.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Modeling Garden Path Effects without Explicit Hierarchical Syntax",
"authors": [
{
"first": "Marten",
"middle": [],
"last": "Van Schijndel",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "2600--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marten van Schijndel and Tal Linzen. 2018. Modeling Garden Path Effects without Explicit Hierarchical Syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society, pages 2600-2605, Austin, Texas. Cognitive Science.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Cloze but no Cigar: The Complex Relationship between Cloze, Corpus, and Subjective Probabilities in Language Processing",
"authors": [
{
"first": "Nathaniel",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel Smith and Roger Levy. 2011. Cloze but no Cigar: The Complex Relationship between Cloze, Corpus, and Subjective Probabilities in Language Processing. Proceedings of the Annual Meeting of the Cognitive Science Society, 33(33).",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "The Effect of Word Predictability on Reading Time is Logarithmic",
"authors": [
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognition",
"volume": "128",
"issue": "3",
"pages": "302--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel J Smith and Roger Levy. 2013. The Effect of Word Predictability on Reading Time is Logarith- mic. Cognition, 128(3):302-319.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Word Recognition and Syntactic Attachment in Reading: Evidence for a Staged Architecture",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Staub",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Experimental Psychology. General",
"volume": "140",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Staub. 2011. Word Recognition and Syntac- tic Attachment in Reading: Evidence for a Staged Architecture. Journal of Experimental Psychology. General, 140(3):407.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "The Influence of Cloze Probability and Item Constraint on Cloze Task Response Time",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Staub",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Grant",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Astheimer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Memory and Language",
"volume": "82",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.1016/j.jml.2015.02.004"
]
},
"num": null,
"urls": [],
"raw_text": "Adrian Staub, Margaret Grant, Lori Astheimer, and An- drew Cohen. 2015. The Influence of Cloze Proba- bility and Item Constraint on Cloze Task Response Time. Journal of Memory and Language, 82:1 -17.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "SRILM -an Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Seventh international conference on spoken language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an Extensible Lan- guage Modeling Toolkit. In Seventh international conference on spoken language processing.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Cloze Procedure\": A New tool for Measuring Readability",
"authors": [
{
"first": "W",
"middle": [
"L"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. L. Taylor. 1953. \"Cloze Procedure\": A New tool for Measuring Readability. Journalism Quarterly, 30:415.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Attention is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Can Neural Networks Acquire a Structural Bias from Raw Linguistic Data?",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1737--1743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt and Samuel R Bowman. 2020. Can Neu- ral Networks Acquire a Structural Bias from Raw Linguistic Data? In Proceedings of the 2020 Confer- ence of the Cognitive Science Society, pages 1737- 1743.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior",
"authors": [
{
"first": "Ethan",
"middle": [
"Gotlieb"
],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1707--1713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger P. Levy. 2020. On the Predictive Power of Neural Language Models for Human Real- Time Comprehension Behavior. In Proceedings of the 2020 Conference of the Cognitive Science Soci- ety, pages 1707-1713.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Mariama Drame, Quentin Lhoest, and Alexander M Rush. 2019. HuggingFace's Transformers: State-of-the-Art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Can- wen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M Rush. 2019. HuggingFace's Transformers: State-of-the- Art Natural Language Processing. arXiv e-prints, page arXiv:1910.03771.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural in- formation processing systems, pages 5753-5763.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "-gram: N-gram model using a window size of 5 with Kneser-Ney smoothing, obtained via the SRILM language modeling toolkit (Stolcke, 2002). 2. LSTM: A standard 2-layer LSTM RNN implemented in PyTorch (Paszke et al., 2017), used here with 256 hidden units and word embedding size of 256, and trained on the wikitext-103 corpus (Merity et al., 2016) via a next-word prediction task (40 epochs, batch size = 40, learning rate = 20). 3. GPT-2: A Transformer-based LM trained on the WebText corpus (Radford et al., 2019). 4. Transformer-XL (TXL; Dai et al., 2019): A Transformer-based LM with a segment level recurrence mechanism and relative positional embeddings trained on wikitext-103. 5. XLNet (Yang et al., 2019): A Transformerbased LM trained with a permutation language modeling objective as well as a segment level recurrence mechanism and relative positional embeddings. Training data consists of \u223c30 billion tokens across 6 different copora.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Results for CD-LSTM and the LSTM model (without any fine tuning) show that Cloze Distillation yields substantial improvement across several psychometric measures. Panels (a)-(c) show changes in F statistics as a function of \u03b1 for three LME comparisons, and panels (d)-(f) show changes in three correlational measures. Dashed lines in panels (b)-(f) show the performance of the LSTM model. (a) LME based on CD-LSTM's surprisals outperforms the LME based on the LSTM's surprisals for most values of \u03b1 (not significant for \u03b1 < 0.65). (b) LME based on CD-LSTM's surprisals outperforms the null (intercept only) model, and this performance generally improves with \u03b1. (c) LME based on CD-LSTM's surprisals with the baseline factors (word frequency and length) outperforms the baseline-only LME for several values of \u03b1. (d) Pearson's correlation between CD-LSTM's surprisals and reading times. (e) Pearson's correlation between CD-LSTM's surprisals and word frequencies. (f) Kendall's \u03c4 correlation between CD-LSTM's surprisals and human cloze surprisals.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Average Surprisal (left) and KL divergence (right) over Provo as a function of the distillation interpolation coefficient \u03b1. Dashed lines show LSTM performance before fine-tuning.",
"num": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>LSTM</td><td/><td/><td/><td>0.075 0.1</td></tr><tr><td>\u2026</td><td/><td/><td/><td>0.025 0.05</td></tr><tr><td>LSTM</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td>of pr ed ic t su gg es t w ou ld ca n ex pl ai n sh ow ha ve ar e</td></tr><tr><td/><td/><td/><td/><td>1</td></tr><tr><td>science's</td><td>best</td><td>current</td><td>models</td><td>0.5 0.75</td></tr><tr><td/><td/><td/><td/><td>0.25</td></tr><tr><td/><td/><td/><td/><td>of pr ed ic t su gg es t w ou ld ca n ex pl ai n sh ow ha ve ar e</td></tr><tr><td/><td/><td/><td/><td>0.4</td></tr><tr><td/><td/><td/><td/><td>0.3</td></tr><tr><td/><td/><td/><td/><td>0.2</td></tr><tr><td/><td/><td/><td/><td>0.1</td></tr><tr><td/><td/><td/><td/><td>of pr ed ic t su gg es t w ou ld ca n ex pl ai n sh ow ha ve ar e</td></tr></table>",
"num": null,
"text": ")."
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>base \u03c1 gaze</td><td>\u03c1 freq</td></tr><tr><td>Cloze</td><td>N A</td><td>N A</td><td>3.99</td></tr></table>",
"num": null,
"text": "\u00b1 2.60 198.10 30.90 0.36 \u22120.43 GPT-2 2.30 \u00b1 1.57 \u22120.57 \u00b1 0.004 6.11 \u00b1 5.00 252.70 46.11 0.40 \u22120.46 XLNet 2.39 \u00b1 1.68 \u22120.58 \u00b1 0.005 6.39 \u00b1 5.70 260.50 46.08 0.41 \u22120.48 TXL 3.27 \u00b1 1.92 \u22120.47 \u00b1 0.005 8.09 \u00b1 5.50 238.30 30.54 0.39 \u22120.50 LSTM 3.74 \u00b1 1.86 \u22120.39 \u00b1 0.006 8.58 \u00b1 4.90 361.20 41.47 0.47 \u22120.63 5-gram 3.89 \u00b1 1.84 \u22120.20 \u00b1 0.007 12.48 \u00b1 7.00 161.00 16.72 0.31 \u22120.41"
}
}
}
}