{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:49.639647Z"
},
"title": "How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Carlos",
"middle": [],
"last": "Armendariz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Chang",
"middle": [],
"last": "Shu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Nottingham Ningbo China",
"location": {
"addrLine": "7 DeepBrain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the influence of context on sentence acceptability. First we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models in their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state-of-the-art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the influence of context on sentence acceptability. First we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models in their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state-of-the-art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentence acceptability is the extent to which a sentence appears natural to native speakers of a language. Linguists have often used this property to motivate grammatical theories. Computational language processing has traditionally been more concerned with likelihood-the probability of a sentence being produced or encountered. The question of whether and how these properties are related is a fundamental one. Lau et al. (2017b) experiment with unsupervised language models to predict acceptability, and they obtained an encouraging correlation with human ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This raises foundational questions about the nature of linguistic knowledge: If probabilistic models can acquire knowledge of sentence acceptability from raw texts, we have prima facie support for an alternative view of language acquisition that does not rely on a categorical grammaticality component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is generally assumed that our perception of sentence acceptability is influenced by context. Sentences that may appear odd in isolation can become natural in some environments, and sentences that seem perfectly well formed in some contexts are odd in others. On the computational side, much recent progress in language modeling has been achieved through the ability to incorporate more document context, using broader and deeper models (e.g., Devlin et al., 2019; Yang et al., 2019) . While most language modeling is restricted to individual sentences, models can benefit from using additional context (Khandelwal et al., 2018) . However, despite the importance of context, few psycholinguistic or computational studies systematically investigate how context affects acceptability, or the ability of language models to predict human acceptability judgments.",
"cite_spans": [
{
"start": 446,
"end": 466,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 467,
"end": 485,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 605,
"end": 630,
"text": "(Khandelwal et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two recent studies that explore the impact of document context on acceptability judgments both identify a compression effect (Bernardy et al., 2018; Bizzoni and Lappin, 2019) . Sentences perceived to be low in acceptability when judged without context receive a boost in acceptability when judged within context. Conversely, those with high out-of-context acceptability see a reduction in acceptability when context is presented. It is unclear what causes this compression effect. Is it a result of cognitive load, imposed by additional processing demands, or is it the consequence of an attempt to identify a discourse relation between context and sentence?",
"cite_spans": [
{
"start": 125,
"end": 148,
"text": "(Bernardy et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 149,
"end": 174,
"text": "Bizzoni and Lappin, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address these questions in this paper. To understand the influence of context on human perceptions, we ran three crowdsourced experiments to collect acceptability ratings from human annotators. We develop a methodology to ensure comparable ratings for each target sentence in isolation (without any context), in a relevant threesentence context, and in the context of sentences randomly sampled from another document. Our results replicate the compression effect, and careful analyses reveal that both cognitive load and discourse coherence are involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To understand the relationship between sentence acceptability and probability, we conduct experiments with unsupervised language models to predict acceptability. We explore traditional unidirectional (left-to-right) recurrent neural network models, and modern bidirectional transformer models (e.g., BERT) . We found that bidirectional models consistently outperform unidirectional models by a wide margin, calling into question the suitability of left-to-right bias for sentence processing. Our best bidirectional model achieves simulated human performance on the prediction task, establishing a new state-of-the-art.",
"cite_spans": [
{
"start": 300,
"end": 305,
"text": "BERT)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To understand how humans interpret acceptability, we require a set of sentences with varying degrees of well-formedness. Following previous studies (Lau et al., 2017b; Bernardy et al., 2018) , we use round-trip machine translation to introduce a wide range of infelicities into naturally occurring sentences.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "(Lau et al., 2017b;",
"ref_id": "BIBREF35"
},
{
"start": 168,
"end": 190,
"text": "Bernardy et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "We sample 50 English (target) sentences and their contexts (three preceding sentences) from the English Wikipedia. 1 We use Moses to translate the target sentences into four languages (Czech, Spanish, German, and French) and then back to 1 We preprocess the raw dump with WikiExtractor (https://github.com/attardi/wikiextractor), and collect paragraphs that have \u2265 4 sentences with each sentence having \u2265 5 words. Sentences and words are tokenized with spaCy (https://spacy.io/) to check for these constraints.",
"cite_spans": [
{
"start": 115,
"end": 116,
"text": "1",
"ref_id": null
},
{
"start": 238,
"end": 239,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "English. 2 This produces 250 sentences in total (5 languages including English) for our test set. Note that we only do round-trip translation for the target sentences; the contexts are not modified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "We use Amazon Mechanical Turk (AMT) to collect acceptability ratings for the target sentences. 3 We run three experiments where we expose users to different types of context. For the experiments, we split the test set into 25 HITs of 10 sentences. Each HIT contains 2 original English sentences and 8 round-trip translated sentences, which are different from each other and not derived from either of the originals. Users are asked to rate the sentences for naturalness on a 4-point ordinal scale: bad (1.0), not very good (2.0), mostly good (3.0), and good (4.0). We recruit 20 annotators for each HIT.",
"cite_spans": [
{
"start": 95,
"end": 96,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "In the first experiment we present only the target sentences, without any context. In the second experiment, we first show the context paragraph (three preceding sentences of the target sentence), and ask users to select the most appropriate description of its topic from a list of four candidate topics. Each candidate topic is represented by three words produced by a topic model. 4 Note that the context paragraph consists of original English sentences which did not undergo translation. Once the users have selected the topic, they move to the next screen where they rate the target sentence for naturalness. 5 The third experiment has the same format as the second, except that the three sentences presented prior to rating are randomly sampled from another Wikipedia article. 6 We require annotators to perform a topic identification task prior to rating the target sentence to ensure that they read the context before making acceptability judgments.",
"cite_spans": [
{
"start": 383,
"end": 384,
"text": "4",
"ref_id": null
},
{
"start": 613,
"end": 614,
"text": "5",
"ref_id": null
},
{
"start": 782,
"end": 783,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "For each sentence, we aggregate the ratings from multiple annotators by taking the mean. Henceforth we refer to the mean ratings collected from the first (no context), second (real context), and third (random context) experiments as H \u2205 , H + , and H \u2212 , respectively. We rolled out the experiments on AMT over several weeks and prevented users from doing more than one experiment. Therefore a disjoint group of annotators performed each experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "To control for quality, we check that users are rating the English sentences \u2265 3.0 consistently. For the second and third experiments, we also check that users are selecting the topics appropriately. In each HIT one context paragraph has one real topic (from the topic model), and three fake topics with randomly sampled words as the candidate topics. Users who fail to identify the real topic above a confidence level are filtered out. Across the three experiments, over three quarters of workers passed our filtering conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "To calibrate for the differences in rating scale between users, we follow the postprocessing procedure of Hill et al. (2015) , where we calculate the average rating for each user and the overall average (by taking the mean of all average ratings), and decrease (increase) the ratings of a user by 1.0 if their average rating is greater (smaller) than the overall average by 1.0. 7 To reduce the impact of outliers, for each sentence we also remove ratings that are more than 2 standard deviations away from the mean. 8",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "Hill et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
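Below is a minimal sketch of the calibration and outlier-filtering procedure described above. It assumes the raw judgments sit in a pandas DataFrame with hypothetical columns user, sentence, and rating; the thresholds follow the text (1.0 for the Hill et al. adjustment, 2 standard deviations for outliers), but the exact handling of edge cases is our assumption, not the authors' released code.

```python
import pandas as pd

def calibrate_and_filter(df: pd.DataFrame) -> pd.DataFrame:
    # Per-user mean rating, and the overall mean of those per-user means.
    user_mean = df.groupby("user")["rating"].transform("mean")
    overall_mean = df.groupby("user")["rating"].mean().mean()
    # Shift a user's ratings down (up) by 1.0 if their average is more than
    # 1.0 above (below) the overall average (Hill et al., 2015 procedure).
    adjusted = df["rating"].copy()
    adjusted[user_mean - overall_mean > 1.0] -= 1.0
    adjusted[overall_mean - user_mean > 1.0] += 1.0
    df = df.assign(rating=adjusted)
    # Drop ratings more than 2 standard deviations from each sentence's mean.
    sent_mean = df.groupby("sentence")["rating"].transform("mean")
    sent_std = df.groupby("sentence")["rating"].transform("std").fillna(0.0)
    keep = (df["rating"] - sent_mean).abs() <= 2 * sent_std
    return df[keep]
```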
{
"text": "We present scatter plots to compare the mean ratings for the three different contexts (H \u2205 , H + , and H \u2212 ) in Figure 1 . The black line represents the diagonal, and the red line represents the regression line. In general, the mean ratings correlate strongly with each other. Pearson's r for H + vs. H \u2205 = 0.940, H \u2212 vs. H \u2205 = 0.911, and H \u2212 vs. H + = 0.891.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "The regression (red) and diagonal (black) lines in H + vs. H \u2205 (Figure 1a) show a compression effect. Bad sentences appear a little more natural, and perfectly good sentences become slightly less natural, when context is introduced. 9 This is the same compression effect observed by 7 No worker has an average rating that is greater or smaller than the overall average by 2.0.",
"cite_spans": [
{
"start": 283,
"end": 284,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 63,
"end": 74,
"text": "(Figure 1a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "8 This postprocessing procedure discarded a total of 504 annotations/ratings (approximately 3.9%) over 3 experiments. The final average number of annotations for a sentence in the first, second, and third experiments is 16.4, 17.8, and 15.3, respectively. 9 On average, good sentences (ratings \u2265 3.5) observe a rating reduction of 0.08 and bad sentences (ratings \u2264 1.5) an increase of 0.45. Bernardy et al. (2018) . It is also present in the graph for H \u2212 vs. H \u2205 (Figure 1b) .",
"cite_spans": [
{
"start": 391,
"end": 413,
"text": "Bernardy et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 464,
"end": 475,
"text": "(Figure 1b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "Two explanations of the compression effect seem plausible to us. The first is a discourse coherence hypothesis that takes this effect to be caused by a general tendency to find infelicitous sentences more natural in context. This hypothesis, however, does not explain why perfectly natural sentences appear less acceptable in context. The second hypothesis is a variant of a cognitive load account. In this view, interpreting context imposes a significant burden on a subject's processing resources, and this reduces their focus on the sentence presented for acceptability judgments. At the extreme ends of the rating scale, as they require all subjects to be consistent in order to achieve the minimum/maximum mean rating, the increased cognitive load increases the likelihood of a subject making a mistake. This increases/lowers the mean rating, and creates a compression effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "The discourse coherence hypothesis would imply that the compression effect should appear with real contexts, but not with random ones, as there is little connection between the target sentence and a random context. By contrast, the cognitive load account predicts that the effect should be present in both types of context, as it depends only on the processing burden imposed by interpreting the context. We see compression in both types of contexts, which suggests that the cognitive load hypothesis is the more likely account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "However, these two hypotheses are not mutually exclusive. It is, in principle, possible that both effects-discourse coherence and cognitive load-are exhibited when context is introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "To better understand the impact of discourse coherence, consider Figure 1c , where we compare H \u2212 vs. H + . Here the regression line is parallel to and below the diagonal, implying that there is a consistent decrease in acceptability ratings from H + to H \u2212 . As both ratings are collected with some form of context, the cognitive load confound is removed. What remains is a discourse coherence effect. Sentences presented in relevant contexts undergo a consistent increase in acceptability rating.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 74,
"text": "Figure 1c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "To analyze the significance of this effect, we use the non-parametric Wilcoxon signed-rank test (one-tailed) to compare the difference between H + and H \u2212 . This gives a p-value of 1.9 \u00d7 10 \u22128 , Figure 1 : Scatter plots comparing human acceptability ratings.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "indicating that the discourse coherence effect is significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
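The significance test above can be reproduced with a few lines of SciPy. This is an illustrative sketch, assuming h_plus and h_minus are aligned per-sentence mean ratings; it is not the authors' exact script.

```python
from scipy.stats import wilcoxon

def coherence_effect_pvalue(h_plus, h_minus):
    # One-tailed Wilcoxon signed-rank test: are the H+ ratings
    # systematically higher than the paired H- ratings?
    _, p_value = wilcoxon(h_plus, h_minus, alternative="greater")
    return p_value
```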
{
"text": "Returning to Figures 1a and 1b, we can see that (1) the offset of the regression line, and (2) the intersection point of the diagonal and the regression line, is higher in Figure 1a than in Figure 1b . This suggests that there is an increase of ratings, and so, in addition to the cognitive load effect, a discourse coherence effect is also at work in the real context setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 181,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 190,
"end": 199,
"text": "Figure 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "We performed hypothesis tests to compare the regression lines in Figures 1a and 1b to see if their offsets (constants) and slopes (coefficients) are statistically different. 10 The p-value for the offset is 1.7 \u00d7 10 \u22122 , confirming our qualitative observation that there is a significant discourse coherence effect. The p-value for the slope, however, is 3.6 \u00d7 10 \u22121 , suggesting that cognitive load compresses the ratings in a consistent way for both H + and H \u2212 , relative to H \u2205 .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 82,
"text": "Figures 1a and 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
{
"text": "To conclude, our experiments reveal that context induces a cognitive load for human processing, and this has the effect of compressing the acceptability distribution. It moderates the extremes by making very unnatural sentences appear more acceptable, and perfectly natural sentences slightly less acceptable. If the context is relevant to the target sentence, then we also have a discourse coherence effect, where sentences are perceived to be generally more acceptable. 10 We follow the procedure detailed in https:// statisticsbyjim.com/regression/comparingregression-lines/ where we collate the data points in Figures 1a and 1b and treat the in-context ratings (H + and H \u2212 ) as the dependent variable, the out-of-context ratings (H \u2205 ) as the first independent variable, and the type of the context (real or random) as the second independent variable, to perform regression analyses. The significance of the offset and slope can be measured by interpreting the p-values of the second independent variable, and the interaction between the first and second independent variables, respectively.",
"cite_spans": [
{
"start": 472,
"end": 474,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 614,
"end": 631,
"text": "Figures 1a and 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "2.2"
},
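Footnote 10's regression-line comparison can be expressed as a single OLS model with an interaction term. The sketch below uses statsmodels with illustrative variable names (h_ctx, h_zero, real); it follows the description in the footnote rather than any released code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_regression_lines(h_zero, h_ctx, is_real_context):
    # h_zero: out-of-context ratings; h_ctx: in-context ratings (H+ or H-);
    # is_real_context: 1 for real context, 0 for random context.
    df = pd.DataFrame({"h_ctx": h_ctx, "h_zero": h_zero, "real": is_real_context})
    model = smf.ols("h_ctx ~ h_zero * real", data=df).fit()
    # The p-value of 'real' tests the offset difference; the p-value of the
    # 'h_zero:real' interaction tests the slope difference.
    return model.pvalues["real"], model.pvalues["h_zero:real"]
```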
{
"text": "In this section, we explore computational models to predict human acceptability ratings. We are interested in models that do not rely on explicit supervision (i.e., we do not want to use the acceptability ratings as labels in the training data). Our motivation here is to understand the extent to which sentence probability, estimated by an unsupervised model, can provide the basis for predicting sentence acceptability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Acceptability",
"sec_num": "3"
},
{
"text": "To this end, we train language models (Section 3.1) using unsupervised objectives (e.g., next word prediction), and use these models to infer the probabilities of our test sentences. To accommodate sentence length and lexical frequency we experiment with several simple normalization methods, converting probabilities to acceptability measures (Section 3.2). The acceptability measures are the final output of our models; they are what we use to compare to human acceptability ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Acceptability",
"sec_num": "3"
},
{
"text": "Our first model is an LSTM language model (LSTM: Hochreiter and Schmidhuber, 1997; Mikolov et al., 2010) . Recurrent neural network models (RNNs) have been shown to be competitive in this task (Lau et al., 2015; Bernardy et al., 2018) , and they serve as our baseline.",
"cite_spans": [
{
"start": 49,
"end": 82,
"text": "Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF25"
},
{
"start": 83,
"end": 104,
"text": "Mikolov et al., 2010)",
"ref_id": "BIBREF39"
},
{
"start": 193,
"end": 211,
"text": "(Lau et al., 2015;",
"ref_id": "BIBREF33"
},
{
"start": 212,
"end": 234,
"text": "Bernardy et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Our second model is a joint topic and language model (TDLM: Lau et al., 2017a) . TDLM combines topic model with language model in a single model, drawing on the idea that the topical context of a sentence can help word prediction in the language model. The topic model is fashioned as an auto-encoder, where the input is the document's word sequence and it is processed by convolutional layers to produce a topic vector to predict the input words. The language model functions like a standard LSTM model, but it incorporates the topic vector (generated by its document context) into the current hidden state to predict the next word.",
"cite_spans": [
{
"start": 53,
"end": 78,
"text": "(TDLM: Lau et al., 2017a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "We train LSTM and TDLM on 100K uncased English Wikipedia articles containing approximately 40M tokens with a vocabulary of 66K words. 11 Next we explore transformer-based models, as they have become the benchmark for many NLP tasks in recent years (Vaswani et al., 2017; Devlin et al., 2019; Yang et al., 2019) . The transformer models that we use are trained on a much larger corpus, and they are four to five times larger with respect to their model parameters.",
"cite_spans": [
{
"start": 134,
"end": 136,
"text": "11",
"ref_id": null
},
{
"start": 248,
"end": 270,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF49"
},
{
"start": 271,
"end": 291,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 292,
"end": 310,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Our first transformer is GPT2 (Radford et al., 2019) . Given a target word, the input is a sequence of previously seen words, which are then mapped to embeddings (along with their positions) and fed to multiple layers of ''transformer blocks'' before the target word is predicted. Much of its power resides in these transformer blocks: Each provides a multi-headed self-attention unit over all input words, allowing it to capture multiple dependencies between words, while avoiding the need for recurrence. With no need to process a sentence in sequence, the model parallelizes more efficiently, and scales in a way that RNNs cannot. GPT2 is trained on WebText, which consists of over 8 million web documents, and uses Byte Pair Encoding (BPE: Sennrich et al., 2016) for tokenization (casing preserved). BPE produces sub-word units, a middle ground between word and character, and it provides better coverage for unseen words. We use the released medium-sized model (''Medium'') for our experiments. 12 Our second transformer is BERT (Devlin et al., 2019) . Unlike GPT2, BERT is not a typical language model, in the sense that it has access to both left and right context words when predicting the target word. 13 Hence, it encodes context in a bidirectional manner.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF43"
},
{
"start": 1000,
"end": 1002,
"text": "12",
"ref_id": null
},
{
"start": 1034,
"end": 1055,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1211,
"end": 1213,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "To train BERT, Devlin et al. (2019) propose a masked language model objective, where a random proportion of input words are masked and the model is tasked to predict them based on non-masked words. In addition to this objective, BERT is trained with a next sentence prediction objective, where the input is a pair of sentences, and the model's goal is to predict whether the latter sentence follows the former. This objective is added to provide pre-training for downstream tasks that involve understanding the relationship between a pair of sentences (e.g., machine comprehension and textual entailment).",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "The bidirectionality of BERT is the core feature that produces its state-of-the-art performance on a number of tasks. The flipside of this encoding style, however, is that BERT lacks the ability to generate left-to-right and compute sentence probability. We discuss how we use BERT to produce a probability estimate for sentences in the next section (Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "In our experiments, we use the largest pretrained model (''BERT-Large''), 14 which has a similar number of parameters (340M) to GPT2. It is trained on Wikipedia and BookCorpus (Zhu et al., 2015) , where the latter is a collection of fiction books. Like GPT2, BERT also uses sub-word tokenization (WordPiece). We experiment with two variants of BERT: one trained on cased data (BERT CS ), and another on uncased data (BERT UCS ). As our test sentences are uncased, a comparison between these two models allows us to gauge the impact of casing in the training data.",
"cite_spans": [
{
"start": 176,
"end": 194,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Our last transformer model is XLNET (Yang et al., 2019) . XLNET is unique in that it applies a novel permutation language model objective, allowing it to capture bidirectional context while preserving key aspects of unidirectional language models (e.g., left-to-right generation).",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "The permutation language model objective works by first generating a possible permutation (also called ''factorization order'') of a sequence. When predicting a target word in the sequence, the context words that the model has access to are determined by the factorization order. To illustrate this, imagine we have the sequence x = [x 1 , x 2 , x 3 , x 4 ]. One possible factorization order is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "x 3 \u2192 x 2 \u2192 x 4 \u2192 x 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Given this order, if predicting target word x 4 , the model only has access to context words {x 3 , x 2 }; if the target word is x 2 , it sees only {x 3 }. In practice, the target word is set to be the last few words in the factorization order (e.g., x 4 and x 1 ), and so the model always sees some context words for prediction. As XLNET is trained to work with different factorization orders during training, it has experienced both full/bidirectional context and partial/ unidirectional context, allowing it to adapt to tasks that have access to full context (e.g., most language understanding tasks), as well as those that do not (e.g., left-to-right generation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Another innovation of XLNET is that it incorporates the segment recurrence mechanism of Dai et al. (2019) . This mechanism is inspired by truncated backpropagation through time used for training RNNs, where the initial state of a sequence is initialized with the final state from the previous sequence. The segment recurrence mechanism works in a similar way, by caching the hidden states of the transformer blocks from the previous sequence, and allowing the current sequence to attend to them during training. This permits XLNET to model long-range dependencies beyond its maximum sequence length.",
"cite_spans": [
{
"start": 88,
"end": 105,
"text": "Dai et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "We use the largest pre-trained model (''XLNet-Large''), 15 which has a similar number of parameters to our BERT and GPT2 models (340M). XLNET is trained on a much larger corpus combining Wikipedia, BookCorpus, news and web articles. For tokenization, XLNET uses SentencePiece (Kudo and Richardson, 2018) , another sub-word tokenization technique. Like GPT2, XLNET is trained on cased data. Table 1 summarizes the language models. In general, the RNN models are orders of magnitude smaller than the transformers in both model parameters and training data, although they are trained on the same domain (Wikipedia), and use uncased data as the test sentences. The RNN models also operate on a word level, whereas the transformers use sub-word units.",
"cite_spans": [
{
"start": 276,
"end": 303,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.1"
},
{
"text": "Given a unidirectional language model, we can infer the probability of a sentence by multiplying the estimated probabilities of each token using previously seen (left) words as context (Bengio et al., 2003) :",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2192 P (s) = |s| i=0 P (w i |w <i )",
"eq_num": "(1)"
}
],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "where s is the sentence, and w i a token in s. LSTM, TDLM, and GPT2 are unidirectional models, so they all compute sentence probability as described. XLNET's unique permutational language model objective allows it to compute probability in the same way, and to explicitly mark this we denote it as XLNET UNI when we infer sentence probability using only left context words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
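A minimal sketch of Equation (1) with an off-the-shelf unidirectional model follows. It uses the current Hugging Face transformers API and the small public gpt2 checkpoint for brevity, which differ from the pytorch-transformers version and the model sizes used in the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def left_to_right_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i-1 give P(w_i | w_<i); the first token is not scored.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    return token_log_probs.sum().item()  # log P(s) as in Equation (1)
```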
{
"text": "BERT is trained with bidirectional context, and as such it is unable to compute left-to-right sentence probability. 16 We therefore compute sentence probability as follows:",
"cite_spans": [
{
"start": 116,
"end": 118,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2194 P (s) = |s| i=0 P (w i |w < i , w > i )",
"eq_num": "(2)"
}
],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "With this formulation, we allow BERT to have access to both left and right context words when predicting each target word, since this is consistent with the way in which it was trained. It is important to note, however, that sentence probability computed this way is not a true probability value: These probabilities do not sum to 1.0 over all sentences. Equation (1), in contrast, does guarantee true probabilities. Intuitively, the sentence probability computed with this bidirectional formulation is a measure of the model's confidence in the likelihood of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
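Equation (2) can be approximated by masking one position at a time and summing the log probabilities BERT assigns to the original tokens. The sketch below uses the Hugging Face bert-base-uncased checkpoint for brevity (the paper uses BERT-Large); as noted above, the result is a confidence score rather than a true probability.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def bidirectional_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        # log P(w_i | w_<i, w_>i) for the original token at position i.
        total += torch.log_softmax(logits[0, i], dim=-1)[ids[i]].item()
    return total
```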
{
"text": "To compute the true probability, Wang and Cho (2019) show that we need to sum the pre-softmax weights for each token to score a sentence, and then divide the score by the total score of all sentences. As it is impractical to compute the total score of all sentences (an infinite set), the true sentence probabilities for these bidirectional models are intractable. We use our non-normalized confidence scores as stand-ins for these probabilities.",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "Wang and Cho (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "For XLNET, we also compute sentence probability this way, applying bidirectional context, and we denote it as XLNET BI . Note that XLNET UNI and XLNET BI are based on the same trained model. They differ only in how they estimate sentence probability at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "Sentence probability (estimated either using unidirectional or bidirectional context) is affected by its length (e.g., longer sentences have lower probabilities), and word frequency (e.g., the cat is big vs. the yak is big). To modulate for these factors we introduce simple normalization techniques. Table 2 presents five methods to map sentence probabilities to acceptability measures: LP, MeanLP, PenLP, NormLP, and SLOR.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "LP is the unnormalized log probability. Both MeanLP and PenLP are normalized on sentence length, but PenLP scales length with an exponent (\u03b1) to dampen the impact of large values (Wu et al., 2016; Vaswani et al., 2017) . We set \u03b1 = 0.8 in our experiments. NormLP normalizes using unigram sentence probability (i.e., P u (s) = |s| i=0 P (w i )), while SLOR utilizes both length and unigram probability (Pauls and Klein, 2012) .",
"cite_spans": [
{
"start": 179,
"end": 196,
"text": "(Wu et al., 2016;",
"ref_id": "BIBREF53"
},
{
"start": 197,
"end": 218,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 401,
"end": 424,
"text": "(Pauls and Klein, 2012)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "When computing sentence probability we have the option of including the context paragraph that the human annotators see (Section 2). We use the superscripts \u2205, +, \u2212 to denote a model using no context, real context, and random context, respectively (e.g., LSTM \u2205 , LSTM + , and LSTM \u2212 ). Note that these variants are created at test time, and are all based on the same trained model (e.g., LSTM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "For all models except TDLM, incorporating the context paragraph is trivial. We simply prepend it to the target sentence before computing the latter's probability. For TDLM + or TDLM \u2212 , the context paragraph is treated as the document context, from which a topic vector is inferred and fed to Acc. Measure Equation Table 2 : Acceptability measures for predicting the acceptability of a sentence; P (s) is the sentence probability, computed using Equation (1) or Equation (2) depending on the model; P u (s) is the sentence probability estimated by a unigram language model; and \u03b1 =0.8.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
{
"text": "LP log P (s) MeanLP log P (s) |s| PenLP log P (s) ((5 + |s|)/(5 + 1)) \u03b1 NormLP \u2212 log P (s) log P u (s) SLOR log P (s) \u2212 log P u (s) |s|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
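For concreteness, the five measures in Table 2 written out as plain functions; logp is log P(s), logp_u is log P_u(s), and n is |s| (these argument names are ours, not the paper's).

```python
def lp(logp, logp_u, n):
    return logp

def mean_lp(logp, logp_u, n):
    return logp / n

def pen_lp(logp, logp_u, n, alpha=0.8):
    # Length penalty of Wu et al. (2016), with alpha = 0.8 as in the paper.
    return logp / (((5 + n) / (5 + 1)) ** alpha)

def norm_lp(logp, logp_u, n):
    return -logp / logp_u

def slor(logp, logp_u, n):
    return (logp - logp_u) / n
```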
{
"text": "the language model for next-word prediction. For TDLM \u2205 , we set the topic vector to zeros.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability and Acceptability Measure",
"sec_num": "3.2"
},
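A sketch of the context-augmented variants for a unidirectional model: prepend the context paragraph and sum log probabilities over the target sentence's tokens only. It reuses the same public gpt2 checkpoint as the earlier sketch and assumes the context tokenization is a prefix of the joint tokenization, which is an approximation at BPE boundaries.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob_with_context(context: str, sentence: str) -> float:
    n_ctx = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(context + " " + sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    # token_log_probs[j] scores token j+1, so target tokens start at n_ctx - 1.
    return token_log_probs[n_ctx - 1:].sum().item()
```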
{
"text": "For the transformer models (GPT2, BERT, and XLNET), we use the implementation of pytorchtransformers. 17",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
{
"text": "XLNET requires a long dummy context prepended to the target sentence for it to compute the sentence probability properly. 18 Other researchers have found a similar problem when using XLNET for generation. 19 We think that this is likely due to XLNET's recurrence mechanism (Section 3.1), where it has access to context from the previous sequence during training.",
"cite_spans": [
{
"start": 122,
"end": 124,
"text": "18",
"ref_id": null
},
{
"start": 205,
"end": 207,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
{
"text": "For TDLM, we use the implementation provided by Lau et al. (2017a), 20 following their optimal hyper-parameter configuration without tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
{
"text": "We implement LSTM based on Tensorflow's Penn Treebank language model. 21 In terms of hyper-parameters, we follow the configuration of TDLM where applicable. TDLM uses Adam as the optimizer (Kingma and Ba, 2014), but for LSTM we use Adagrad (Duchi et al., 2011) , as it produces better development perplexity.",
"cite_spans": [
{
"start": 70,
"end": 72,
"text": "21",
"ref_id": null
},
{
"start": 240,
"end": 260,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
{
"text": "For NormLP and SLOR, we need to compute P u (s), the sentence probability based on a unigram language model. As the language models are trained on different corpora, we collect unigram counts based on their original training corpus. That is, for LSTM and TDLM, we use the 100K English Wikipedia corpus. For GPT2, we use an open source implementation that reproduces the original WebText data. 22 For BERT we use the full Wikipedia collection and crawl smashwords. com to reproduce BookCorpus. 23 Finally, for XLNET we use the combined set of Wikipedia, WebText, and BookCorpus. 24 Source code for our experiments is publicly available at: https://github.com/jhlau/ acceptability-prediction-in-context.",
"cite_spans": [
{
"start": 393,
"end": 395,
"text": "22",
"ref_id": null
},
{
"start": 493,
"end": 495,
"text": "23",
"ref_id": null
},
{
"start": 578,
"end": 580,
"text": "24",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
{
"text": "We use Pearson's r to assess how well the models' acceptability measures predict mean human acceptability ratings, following previous studies (Lau et al., 2017b; Bernardy et al., 2018) . Recall that for each model (e.g., LSTM), there are three variants with which we infer the sentence probability at test time. These are distinguished by whether we include no context (LSTM \u2205 ), real context (LSTM + ), or random context (LSTM \u2212 ). There are also three types of human acceptability ratings (ground truth), where sentences are judged with no context, (H \u2205 ), real context (H + ), and random context (H \u2212 ). We present the full results in Table 3 .",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "(Lau et al., 2017b;",
"ref_id": "BIBREF35"
},
{
"start": 162,
"end": 184,
"text": "Bernardy et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 638,
"end": 645,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
{
"text": "To get a sense of what the correlation figures indicate for these models, we compute two human performance estimates to serve as upper bounds on the accuracy of a model. The first upper bound (UB 1 ) is the one-vs-rest annotator correlation, where we select a random annotator's rating and compare it to the mean rating of the rest, using Pearson's r. We repeat this for a large number of trials (1,000) to get a robust estimate of the mean correlation. UB 1 can be interpreted as the average human performance working in isolation. The second upper bound (UB 2 ) is the half-vs.-half annotator correlation. For each sentence we randomly split the annotators into two groups, and compare the mean rating between groups, again using Pearson's r and repeating it (1,000 times) to get a robust estimate. UB 2 can be taken as the average human performance working collaboratively. Overall, the simulated human performance is fairly consistent over context types (Table 3) , for example, UB 1 = 0.75, 0.73, and 0.75 for H \u2205 , H + , and H \u2212 , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 958,
"end": 967,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
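The two upper bounds can be simulated directly from the per-sentence rating lists; below is an illustrative version (the variable ratings, mapping a sentence to its list of individual annotator ratings, is our assumption about the data layout).

```python
import random
import numpy as np
from scipy.stats import pearsonr

def upper_bounds(ratings: dict, trials: int = 1000):
    ub1, ub2 = [], []
    for _ in range(trials):
        one, rest, half_a, half_b = [], [], [], []
        for rs in ratings.values():
            rs = list(rs)
            random.shuffle(rs)
            # UB1: one random rating vs. the mean of the remaining ratings.
            one.append(rs[0])
            rest.append(np.mean(rs[1:]))
            # UB2: mean of one random half vs. mean of the other half.
            mid = len(rs) // 2
            half_a.append(np.mean(rs[:mid]))
            half_b.append(np.mean(rs[mid:]))
        ub1.append(pearsonr(one, rest)[0])
        ub2.append(pearsonr(half_a, half_b)[0])
    return float(np.mean(ub1)), float(np.mean(ub2))
```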
{
"text": "When we postprocess the user ratings, remember that we remove the outlier ratings (\u2265 2 standard deviation) for each sentence (Section 2.1). Although this produces a cleaner set of annotations, this filtering step does (artificially) increase the human agreement or upper bound correlations. For completeness we also present upper bound variations where we do not remove the outlier ratings, and denote them as UB \u2205 1 and UB \u2205 2 . In this setup, the one-vs.-rest correlations drop to 0.62-0.66 ( Table 3) . Note that all model performances are reported based on the outlierfiltered ratings, although there are almost no perceivable changes to the performances when they are evaluated on the outlier-preserved ground truth.",
"cite_spans": [],
"ref_spans": [
{
"start": 495,
"end": 503,
"text": "Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
{
"text": "Looking at Table 3 , the models' performances are fairly consistent over different types of ground truths (H \u2205 , H + , and H \u2212 ). This is perhaps not very surprising, as the correlations among the human ratings for these context types are very high (Section 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
{
"text": "We now focus on the results with H \u2205 as ground truth (''Rtg'' = H \u2205 ). SLOR is generally the best acceptability measure for unidirectional models, with NormLP not far behind (the only exception is GPT2 \u2205 ). The recurrent models (LSTM and TDLM) are very strong compared with the much larger transformer models (GPT2 and XLNET UNI ). In fact TDLM has the best performance when context is not considered (TDLM \u2205 , SLOR = 0.61), suggesting that model architecture may be more important than number of parameters and amount of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
{
"text": "For bidirectional models, the unnormalized LP works very well. The clear winner here, however, is PenLP. It substantially and consistently outperforms all other acceptability measures. The strong performance of PenLP that we see here illuminates its popularity in machine translation for beam search decoding (Vaswani et al., 2017) . With the exception of PenLP, the gain from normalization for the bidirectional models is small, but we don't think this can be attributed to the size of models or training corpora, as the large unidirectional models (GPT2 and XLNET UNI ) still benefit from normalization. The best model without considering context is BERT \u2205 UCS with a correlation of 0.70 (PenLP), which is very close to the idealized single-annotator performance UB 1 (0.75) and surpasses the unfiltered performance UB \u2205 1 (0.66), creating a new state-of-the-art for unsupervised acceptability prediction (Lau et al., 2015 (Lau et al., , 2017b Bernardy et al., 2018) . There is still room to improve, however, relative to the collaborative UB 2 (0.92) or UB \u2205 2 (0.88) upper bounds. We next look at the impact of incorporating context at test time for the models (e.g., LSTM \u2205 vs. LSTM + or BERT \u2205 UCS vs. BERT + UCS ). To ease interpretability we will focus on SLOR for unidirectional models, and PenLP for bidirectional models. Generally, we see that incorporating context always improves correlation, for both cases where we use H \u2205 and H + as ground truths, suggesting that context is beneficial when it comes to sentence modeling. The only exception is TDLM, where TDLM \u2205 and TDLM + perform very similarly. Note, however, that context is only beneficial when it is relevant. Incorporating random contexts (e.g.,",
"cite_spans": [
{
"start": 309,
"end": 331,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 907,
"end": 924,
"text": "(Lau et al., 2015",
"ref_id": "BIBREF33"
},
{
"start": 925,
"end": 945,
"text": "(Lau et al., , 2017b",
"ref_id": "BIBREF35"
},
{
"start": 946,
"end": 968,
"text": "Bernardy et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.4"
},
{
"text": "\u2205 vs. LSTM \u2212 or BERT \u2205 UCS vs. BERT \u2212 UCS with H \u2212 as ground truth) reduces the performance for all models. 25 Recall that our test sentences are uncased (an artefact of Moses, the machine translation system that we use). Whereas the recurrent models are all trained on uncased data, most of the transformer models are trained with cased data. BERT is the only transformer that is pre-trained on both cased (BERT CS ) and uncased data (BERT UCS ). To understand the impact of casing, we look at the performance of BERT CS and BERT UCS with H \u2205 as ground truth. We see an improvement 25 There is one exception: XLNET \u2205 BI (0.62) vs. XLNET \u2212 BI (0.64). As we saw previously in Section 3.3, XLNET requires a long dummy context to work, and so this observation is perhaps unsurprising, because it appears that context-whether it is relevant or not-seems to always benefit XLNET.",
"cite_spans": [
{
"start": 108,
"end": 110,
"text": "25",
"ref_id": null
},
{
"start": 583,
"end": 585,
"text": "25",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "of 5-7 points (depending on whether context is incorporated), which suggests that casing has a significant impact on performance. Given that XLNET + BI already outperforms BERT + UCS (0.73 vs. 0.72), even though XLNET + BI is trained with cased data, we conjecture that an uncased XLNET is likely to outperform BERT \u2205 UCS when context is not considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "To summarize, our first important result is the exceptional performance of bidirectional models. It raises the question of whether left-to-right bias is an appropriate assumption for predicting sentence acceptability. One could argue that this result may be due to our experimental setup. Users are presented with the sentence in text, and they have the opportunity to read it multiple times, thereby creating an environment that may simulate bidirectional context. We could test this conjecture by changing the presentation of the sentence, displaying it one word at a time (with older words fading off), or playing an audio version (e.g., via a text-to-speech system). However, these changes will likely introduce other confounds (e.g., prosody), but we believe it is an interesting avenue for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "Our second result is more tentative. Our experiments seem to indicate that model architecture is more important than training or model size. We see that TDLM, which is trained on data orders of magnitude smaller and has model parameters four times smaller in size (Table 1) , outperforms the large unidirectional transformer models. To establish this conclusion more firmly we will need to rule out the possibility that the relatively good performance of LSTM and TDLM is not due to a cleaner (e.g., lowercased) or more relevant (e.g., Wikipedia) training corpus. With that said, we contend that our findings motivate the construction of better language models, instead of increasing the number of parameters, or the amount of training data. It would be interesting to examine the effect of extending TDLM with a bidirectional objective.",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 273,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "Our final result is that our best model, BERT UCS , attains a human-level performance and achieves a new state-of-the-art performance in the task of unsupervised acceptability prediction. Given this level of accuracy, we expect it would be suitable for tasks like assessing student essays and the quality of machine translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "One may argue that our dataset is potentially biased, as round-trip machine translation may introduce particular types of infelicities or unusual features to the sentences (Graham et al., 2019) . Lau et al. (2017b) addressed this by creating a dataset where they sample 50 grammatical and 50 ungrammatical sentences from Adger (2003) 's syntax textbook, and run a crowdsourced experiment to collect their user ratings. Lau et al. (2017b) found that their unsupervised language models (e.g., simple recurrent networks) predict the acceptability of these sentences with similar performances, providing evidence that their modeling results are robust.",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Graham et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 196,
"end": 214,
"text": "Lau et al. (2017b)",
"ref_id": "BIBREF35"
},
{
"start": 321,
"end": 333,
"text": "Adger (2003)",
"ref_id": "BIBREF1"
},
{
"start": 419,
"end": 437,
"text": "Lau et al. (2017b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguists' Examples",
"sec_num": "4"
},
{
"text": "We test our pre-trained models using this linguist-constructed dataset, and found similar observations: GPT2, BERT CS , and XLNET BI produce a PenLP correlation of 0.45, 0.53, and 0.58, respectively. These results indicate that these language models are able to predict the acceptability of these sentences reliably, consistent with our modeling results with round-trip translated sentences (Section 3.4). Although the correlations are generally lower, we want to highlight that these linguists' examples are artificially constructed to illustrate specific syntactic phenomena, and so this constitutes a particularly strong case of outof-domain prediction. These texts are substantially different in nature from the natural text that the pre-trained language models are trained on (e.g., the linguists' examples are much shorter-less than 7 words on average-than the natural texts).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguists' Examples",
"sec_num": "4"
},
{
"text": "Acceptability is closely related to the concept of grammaticality. The latter is a theoretical construction corresponding to syntactic wellformedness, and it is typically interpreted as a binary property (i.e., a sentence is either grammatical or ungrammatical). Acceptability, on the other hand, includes syntactic, semantic, pragmatic, and non-linguistic factors, such as sentence length. It is gradient, rather than binary, in nature (Denison, 2004; Sorace and Keller, 2005; Sprouse, 2007) .",
"cite_spans": [
{
"start": 437,
"end": 452,
"text": "(Denison, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 453,
"end": 477,
"text": "Sorace and Keller, 2005;",
"ref_id": "BIBREF46"
},
{
"start": 478,
"end": 492,
"text": "Sprouse, 2007)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Linguists and other theorists of language have traditionally assumed that context affects our perception of both grammaticality (Bolinger, 1968) and acceptability (Bever, 1970) , but surprisingly little work investigates this effect systematically, or on a large scale. Most formal linguists rely heavily on the analysis of sentences taken in isolation. However, many linguistic frameworks seek to incorporate aspects of context-dependence. Dynamic theories of semantics (Heim, 1982; Kamp and Reyle, 1993; Groenendijk and Stokhof, 1990) attempt to capture intersentential coreference, binding, and scope phenomena. Dynamic Syntax (Cann et al., 2007) uses incremental tree construction and semantic type projection to render parsing and interpretation discourse dependent. Theories of discourse structure characterize sentence coherence in context through rhetorical relations (Mann and Thompson, 1988; Asher and Lascarides, 2003) , or by identifying open questions and common ground (Ginzburg, 2012) . While these studies offer valuable insights into a variety of context related linguistic phenomena, much of it takes grammaticality and acceptability to be binary properties. Moreover, it is not formulated in a way that permits fine-grained psychological experiments, or wide coverage computational modeling.",
"cite_spans": [
{
"start": 128,
"end": 144,
"text": "(Bolinger, 1968)",
"ref_id": "BIBREF9"
},
{
"start": 163,
"end": 176,
"text": "(Bever, 1970)",
"ref_id": "BIBREF6"
},
{
"start": 471,
"end": 483,
"text": "(Heim, 1982;",
"ref_id": "BIBREF23"
},
{
"start": 484,
"end": 505,
"text": "Kamp and Reyle, 1993;",
"ref_id": "BIBREF27"
},
{
"start": 506,
"end": 536,
"text": "Groenendijk and Stokhof, 1990)",
"ref_id": "BIBREF22"
},
{
"start": 630,
"end": 649,
"text": "(Cann et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 876,
"end": 901,
"text": "(Mann and Thompson, 1988;",
"ref_id": "BIBREF36"
},
{
"start": 902,
"end": 929,
"text": "Asher and Lascarides, 2003)",
"ref_id": "BIBREF3"
},
{
"start": 983,
"end": 999,
"text": "(Ginzburg, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Psycholinguistic work can provide more experimentally grounded approaches. Greenbaum (1976) found that combinations of particular syntactic constructions in context affect human judgments of acceptability, although the small scale of the experiments makes it difficult to draw general conclusions. More recent work investigates related effects, but it tends to focus on very restricted aspects of the phenomenon. For example, Zlogar and Davidson (2018) investigate the influence of context on the acceptability of gestures with speech, focussing on interaction with semantic content and presupposition. The priming literature shows that exposure to lexical and syntactic items leads to higher likelihood of their repetition in production (Reitter et al., 2011) , and to quicker processing in parsing under certain circumstances (Giavazzi et al., 2018) . Frameworks such as ACT-R (Anderson, 1996) explain these effects through the impact of cognitive activation on subsequent processing. Most of these studies suggest that coherent or natural contexts should increase acceptability ratings, given that the linguistic expressions used in processing become more activated. Warner and Glass (1987) show that such syntactic contexts can indeed affect grammaticality judgments in the expected way for garden path sentences. Cowart (1994) uses comparison between positive and negative contexts, investigating the effect of contexts containing alternative more or less acceptable sentences. But he restricts the test cases to specific pronoun binding phenomena. None of the psycholinguistic work investigates acceptability judgments in real textual contexts, over large numbers of test cases and human subjects.",
"cite_spans": [
{
"start": 75,
"end": 91,
"text": "Greenbaum (1976)",
"ref_id": "BIBREF21"
},
{
"start": 426,
"end": 452,
"text": "Zlogar and Davidson (2018)",
"ref_id": "BIBREF57"
},
{
"start": 738,
"end": 760,
"text": "(Reitter et al., 2011)",
"ref_id": "BIBREF44"
},
{
"start": 828,
"end": 851,
"text": "(Giavazzi et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 873,
"end": 895,
"text": "ACT-R (Anderson, 1996)",
"ref_id": null
},
{
"start": 1170,
"end": 1193,
"text": "Warner and Glass (1987)",
"ref_id": "BIBREF51"
},
{
"start": 1318,
"end": 1331,
"text": "Cowart (1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Some recent computational work explores the relation of acceptability judgments to sentence probabilities. Lau et al. (2015 Lau et al. ( , 2017b show that the output of unsupervised language models can correlate with human acceptability ratings. Warstadt et al. (2018) treat this as a semisupervised problem, training a binary classifier on top of a pre-trained sentence encoder to predict acceptability ratings with greater accuracy. Bernardy et al. (2018) explore incorporating context into such models, eliciting human judgments of sentence acceptability when the sentences were presented both in isolation and within a document context. They find a compression effect in the distribution of the human acceptability ratings. Bizzoni and Lappin (2019) observe a similar effect in a paraphrase acceptability task.",
"cite_spans": [
{
"start": 107,
"end": 123,
"text": "Lau et al. (2015",
"ref_id": "BIBREF33"
},
{
"start": 124,
"end": 144,
"text": "Lau et al. ( , 2017b",
"ref_id": "BIBREF35"
},
{
"start": 246,
"end": 268,
"text": "Warstadt et al. (2018)",
"ref_id": null
},
{
"start": 435,
"end": 457,
"text": "Bernardy et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 728,
"end": 753,
"text": "Bizzoni and Lappin (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "One possible explanation for this compression effect is to take it as the expression of cognitive load. Psychological research on the cognitive load effect (Sweller, 1988; Ito et al., 2018; Causse et al., 2016; Park et al., 2013) indicates that performing a secondary task can degrade or distort subjects' performance on a primary task. This could cause judgments to regress towards the mean. However, the experiments of Bernardy et al. (2018) and Bizzoni and Lappin (2019) do not allow us to distinguish this possibility from a coherence or priming effect, as only coherent contexts were considered. Our experimental setup improves on this by introducing a topic identification task and incoherent (random) contexts in order to tease the effects apart.",
"cite_spans": [
{
"start": 156,
"end": 171,
"text": "(Sweller, 1988;",
"ref_id": "BIBREF48"
},
{
"start": 172,
"end": 189,
"text": "Ito et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 190,
"end": 210,
"text": "Causse et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 211,
"end": 229,
"text": "Park et al., 2013)",
"ref_id": "BIBREF41"
},
{
"start": 421,
"end": 443,
"text": "Bernardy et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 448,
"end": 473,
"text": "Bizzoni and Lappin (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We found that processing context induces a cognitive load for humans, which creates a compression effect on the distribution of acceptability ratings. We also showed that if the context is relevant to the sentence, a discourse coherence effect uniformly boosts sentence acceptability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Our language model experiments indicate that bidirectional models achieve better results than unidirectional models. The best bidirectional model performs at a human level, defining a new state-of-the art for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In future work we will explore alternative ways to present sentences for acceptability judgments. We plan to extend TDLM, incorporating a bidirectional objective, as it shows significant promise. It will also be interesting to see if our observations generalize to other languages, and to different sorts of contexts, both linguistic and non-linguistic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "We use the pre-trained Moses models from http:// www.statmt.org/moses/RELEASE-4.0/models/ for translation.3 https://www.mturk.com/.4 We train a topic model with 50 topics on 15 K Wikipedia documents with Mallet(McCallum, 2002) and infer topics for the context paragraphs based on the trained model.5 Note that we do not ask the users to judge the naturalness of the sentence in context; the instructions they see for the naturalness rating task is the same as the first experiment.6 Sampled sentences are sequential, running sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
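{
"text": "As an illustration of the topic-model step in the footnote above, here is a hedged sketch of training a 50-topic model on tokenized Wikipedia documents and inferring the dominant topic of a context paragraph. The authors used Mallet (McCallum, 2002); gensim's LdaModel is substituted here purely for illustration, and the function and variable names are hypothetical.\n\nfrom gensim import corpora, models\n\ndef train_and_infer(wiki_docs, context_paragraph_tokens, num_topics=50):\n    # wiki_docs: list of token lists, e.g. [['colorless', 'green', 'ideas'], ...]\n    dictionary = corpora.Dictionary(wiki_docs)\n    corpus = [dictionary.doc2bow(doc) for doc in wiki_docs]\n    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)\n    bow = dictionary.doc2bow(context_paragraph_tokens)\n    topics = lda.get_document_topics(bow)  # [(topic_id, probability), ...]\n    return max(topics, key=lambda t: t[1])  # dominant topic for the paragraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},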
{
"text": "We use Stanford CoreNLP(Manning et al., 2014) to tokenize words and sentences. Rare words are replaced by a special UNK symbol.12 https://github.com/openai/gpt-2.13 Note that context is burdened with two senses in the paper. It can mean the preceding sentences of a target sentence, or the neighbouring words of a target word. The intended sense should be apparent from the usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
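{
"text": "A minimal sketch of the rare-word replacement step mentioned in the footnote above: tokens below a frequency threshold are mapped to a special UNK symbol. The threshold (min_count) and the helper name are assumptions for illustration; the footnote does not state the actual cutoff.\n\nfrom collections import Counter\n\ndef replace_rare_words(tokenized_sentences, min_count=5, unk='<UNK>'):\n    # Count token frequencies over the whole corpus.\n    counts = Counter(tok for sent in tokenized_sentences for tok in sent)\n    vocab = {tok for tok, c in counts.items() if c >= min_count}\n    # Replace out-of-vocabulary tokens with the UNK symbol.\n    return [[tok if tok in vocab else unk for tok in sent] for sent in tokenized_sentences]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},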
{
"text": "https://github.com/google-research/bert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/zihangdai/xlnet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Technically we can mask all right context words and predict the target words one at a time, but because the model is never trained in this way, we found that it performs poorly in preliminary experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
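{
"text": "For concreteness, here is a hedged sketch of what the footnote above describes: scoring a sentence with BERT by masking the target word together with all words to its right and predicting targets one at a time, left to right. Because BERT is never trained with its right context fully masked, this setup performs poorly, as the footnote notes. The sketch uses the HuggingFace transformers API and is not the authors' code.\n\nimport torch\nfrom transformers import BertTokenizer, BertForMaskedLM\n\ntokenizer = BertTokenizer.from_pretrained('bert-large-cased')\nmodel = BertForMaskedLM.from_pretrained('bert-large-cased')\nmodel.eval()\n\ndef left_to_right_masked_log_prob(sentence):\n    ids = tokenizer(sentence, return_tensors='pt')['input_ids'][0]\n    total = 0.0\n    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]\n        masked = ids.clone()\n        masked[i:len(ids) - 1] = tokenizer.mask_token_id  # mask the target and its right context\n        with torch.no_grad():\n            logits = model(masked.unsqueeze(0)).logits[0, i]\n        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},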
{
"text": "https://github.com/huggingface/pytorchtransformers. Specifically, we employ the following",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://skylion007.github.io/OpenWebTextCorpus/.23 We use the scripts in https://github.com/ soskek/bookcorpus to reproduce BookCorpus.24 XLNET also uses Giga5 and ClueWeb as part of its training data, but we think that our combined collection is sufficiently large to be representative of the original training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to three anonymous reviewers for helpful comments on earlier drafts of this paper. Some of the work described here was presented in talks in the seminar of the Centre for Linguistic Theory and Studies in Probability (CLASP), University of Gothenburg, December 2019, and in the Cambridge University Language Technology Seminar, February 2020. We thank the participants of both events for useful discussion.Lappin's work on the project was supported by grant 2014-39 from the Swedish Research Council, which funds CLASP. Armendariz and Purver were partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflect only the authors' views and the Commission is not responsible for any use that may be made of the information it contains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "bert-largecased for BERT CS , bert-large-uncased for BERT UCS , and xlnet-large-cased for XLNET UNI /XLNET BI",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "pre-trained models: gpt2-medium for GPT2, bert-large- cased for BERT CS , bert-large-uncased for BERT UCS , and xlnet-large-cased for XLNET UNI /XLNET BI . 18 In the scenario where we include the context paragraph (e.g., XLNET + UNI ), the dummy context is added before it.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Core Syntax: A Minimalist Approach",
"authors": [
{
"first": "David",
"middle": [],
"last": "Adger",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Adger. 2003. Core Syntax: A Minimalist Approach, Oxford University Press, United Kingdom.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "ACT: A simple theory of complex cognition",
"authors": [
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 1996,
"venue": "American Psychologist",
"volume": "51",
"issue": "",
"pages": "355--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R. Anderson. 1996. ACT: A simple theory of complex cognition. American Psychologist, 51:355-365.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Logics of Conversation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation, Cambridge University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137-1155.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The influence of context on sentence acceptability judgements",
"authors": [
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"volume": "",
"issue": "",
"pages": "456--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Philippe Bernardy, Shalom Lappin, and Jey Han Lau. 2018. The influence of context on sentence acceptability judgements. In Proceed- ings of the 56th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL 2018), pages 456-461. Melbourne, Australia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The cognitive basis for linguistic structures",
"authors": [
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Bever",
"suffix": ""
}
],
"year": 1970,
"venue": "Cognition and the Development of Language",
"volume": "",
"issue": "",
"pages": "279--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Bever. 1970, The cognitive basis for linguistic structures, J. R. Hayes, editor, Cognition and the Development of Language, Wiley, New York, pages 279-362.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The effect of context on metaphor paraphrase aptness judgments",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Conference on Computational Semantics -Long Papers",
"volume": "",
"issue": "",
"pages": "165--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuri Bizzoni and Shalom Lappin. 2019. The effect of context on metaphor paraphrase aptness judgments. In Proceedings of the 13th International Conference on Computational Semantics -Long Papers, pages 165-175.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Judgments of grammaticality",
"authors": [
{
"first": "Dwight",
"middle": [],
"last": "Bolinger",
"suffix": ""
}
],
"year": 1968,
"venue": "Lingua",
"volume": "21",
"issue": "",
"pages": "34--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dwight Bolinger. 1968. Judgments of grammati- cality. Lingua, 21:34-40.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Context and well-formedness: the dynamics of ellipsis",
"authors": [
{
"first": "Ronnie",
"middle": [],
"last": "Cann",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Kempson",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2007,
"venue": "Research on Language and Computation",
"volume": "5",
"issue": "3",
"pages": "333--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronnie Cann, Ruth Kempson, and Matthew Purver. 2007. Context and well-formedness: the dynamics of ellipsis. Research on Language and Computation, 5(3):333-358.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "High working memory load impairs language processing during a simulated piloting task: An ERP and pupillometry study",
"authors": [
{
"first": "Micka\u00ebl",
"middle": [],
"last": "Causse",
"suffix": ""
},
{
"first": "Vsevolod",
"middle": [],
"last": "Peysakhovich",
"suffix": ""
},
{
"first": "Eve",
"middle": [
"F"
],
"last": "Fabre",
"suffix": ""
}
],
"year": 2016,
"venue": "Frontiers in Human Neuroscience",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micka\u00ebl Causse, Vsevolod Peysakhovich, and Eve F. Fabre. 2016. High working memory load impairs language processing during a simulated piloting task: An ERP and pupillometry study. Frontiers in Human Neuroscience, 10:240.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Anchoring and grammar effects in judgments of sentence acceptability",
"authors": [
{
"first": "Wayne",
"middle": [],
"last": "Cowart",
"suffix": ""
}
],
"year": 1994,
"venue": "Perceptual and Motor Skills",
"volume": "79",
"issue": "3",
"pages": "1171--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Cowart. 1994. Anchoring and grammar effects in judgments of sentence acceptability. Perceptual and Motor Skills, 79(3):1171-1182.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transformer-XL: Attentive language models beyond a fixed",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer- XL: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fuzzy Grammar: A Reader",
"authors": [
{
"first": "David",
"middle": [],
"last": "Denison",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Denison. 2004. Fuzzy Grammar: A Reader, Oxford University Press, United Kingdom.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Structural priming in sentence comprehension: A single prime is enough",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Giavazzi",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Sambin",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "De Diego-Balaguer",
"suffix": ""
},
{
"first": "Lorna",
"middle": [
"Le"
],
"last": "Stanc",
"suffix": ""
},
{
"first": "Anne-Catherine",
"middle": [],
"last": "Bachoud-L\u00e9vi",
"suffix": ""
},
{
"first": "Charlotte",
"middle": [],
"last": "Jacquemot",
"suffix": ""
}
],
"year": 2018,
"venue": "PLoS ONE",
"volume": "13",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Giavazzi, Sara Sambin, Ruth de Diego- Balaguer, Lorna Le Stanc, Anne-Catherine Bachoud-L\u00e9vi, and Charlotte Jacquemot. 2018. Structural priming in sentence comprehen- sion: A single prime is enough. PLoS ONE, 13(4):e0194959.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Interactive Stance: Meaning for Conversation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Ginzburg. 2012. The Interactive Stance: Meaning for Conversation, Oxford University Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Translationese in machine translation evaluation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine trans- lation evaluation. CoRR, abs/1906.09833.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Contextual influence on acceptability judgements. Linguistics",
"authors": [
{
"first": "Sidney",
"middle": [],
"last": "Greenbaum",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "15",
"issue": "",
"pages": "5--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidney Greenbaum. 1976. Contextual influ- ence on acceptability judgements. Linguistics, 15(187):5-12.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dynamic Montague grammar",
"authors": [
{
"first": "Jeroen",
"middle": [],
"last": "Groenendijk",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Stokhof",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 2nd Symposium on Logic and Language",
"volume": "",
"issue": "",
"pages": "3--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeroen Groenendijk and Martin Stokhof. 1990. Dynamic Montague grammar. L. Kalman and L. Polos, editors, In Proceedings of the 2nd Symposium on Logic and Language, pages 3-48. Budapest.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Semantics of Definite and Indefinite Noun Phrases",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Heim",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Heim. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. thesis, Univer- sity of Massachusetts at Amherst.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic mod- els with (genuine) similarity estimation. Com- putational Linguistics, 41:665-695.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A cognitive load delays predictive eye movements similarly during L1 and L2 comprehension",
"authors": [
{
"first": "Aine",
"middle": [],
"last": "Ito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"J"
],
"last": "Pickering",
"suffix": ""
}
],
"year": 2018,
"venue": "Bilingualism: Language and Cognition",
"volume": "21",
"issue": "2",
"pages": "251--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aine Ito, Martin Corley, and Martin J. Pickering. 2018. A cognitive load delays predictive eye movements similarly during L1 and L2 compre- hension. Bilingualism: Language and Cogni- tion, 21(2):251-264.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "From Discourse To Logic",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Reyle",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Kamp and Uwe Reyle. 1993. From Dis- course To Logic, Kluwer Academic Publishers.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sharp nearby, fuzzy far away: How neural language models use context",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "284--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294. Association for Computational Linguistics, Melbourne, Australia.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language inde- pendent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Dem- onstrations, pages 66-71. Brussels, Belgium.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Topically driven neural language model",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "355--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017a. Topically driven neural language model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 355-365.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Unsupervised prediction of acceptability judgements",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Joint conference of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1618--1628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2015. Unsupervised prediction of acceptability judgements. In Proceedings of the Joint conference of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2015), pages 1618-1628.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "41",
"issue": "",
"pages": "1202--1241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017b. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge. Cognitive Science, 41:1202-1241.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Mann and Sandra Thompson. 1988. Rhetorical structure theory: Toward a func- tional theory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Asso- ciation for Computational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http:// mallet.cs.umass.edu.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan\u010dernock\u00fd",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan\u010cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Com- munication Association, pages 1045-1048.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Analysis of cognitive load for language processing based on brain activities",
"authors": [
{
"first": "Hyangsook",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jun-Su",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Sungmook",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Minho",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Neural Information Processing",
"volume": "",
"issue": "",
"pages": "561--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyangsook Park, Jun-Su Kang, Sungmook Choi, and Minho Lee. 2013. Analysis of cognitive load for language processing based on brain activities. In Neural Information Processing, pages 561-568. Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Large-scale syntactic language modeling with treelets",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "959--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959-968. Jeju Island, Korea.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A computational cognitive model of syntactic priming",
"authors": [
{
"first": "Daivd",
"middle": [],
"last": "Reitter",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "Cognitive Science",
"volume": "35",
"issue": "4",
"pages": "587--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daivd Reitter, Frank Keller, and Johanna D. Moore. 2011. A computational cognitive model of syntactic priming. Cognitive Science, 35(4):587-637.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725. Berlin, Germany.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Gradience in linguistic data",
"authors": [
{
"first": "Antonella",
"middle": [],
"last": "Sorace",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2005,
"venue": "Lingua",
"volume": "115",
"issue": "",
"pages": "1497--1524",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonella Sorace and Frank Keller. 2005. Gradience in linguistic data. Lingua, 115:1497-1524.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Continuous acceptability, categorical grammaticality, and experimental syntax",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Sprouse",
"suffix": ""
}
],
"year": 2007,
"venue": "Biolinguistics",
"volume": "",
"issue": "",
"pages": "1123--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon Sprouse. 2007. Continuous acceptability, categorical grammaticality, and experimental syntax. Biolinguistics, 1123-134.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Cognitive load during problem solving: Effects on learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Sweller",
"suffix": ""
}
],
"year": 1988,
"venue": "Cognitive Science",
"volume": "12",
"issue": "2",
"pages": "257--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Sweller. 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2):257-285.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "BERT has a mouth, and it must speak: BERT as a Markov random field language model",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation",
"volume": "",
"issue": "",
"pages": "30--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36. Association for Computational Linguistics, Minneapolis, Minnesota.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Context and distance-to-disambiguation effects in ambiguity resolution: Evidence from grammaticality judgments of garden path sentences",
"authors": [
{
"first": "John",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "Arnold",
"middle": [
"L"
],
"last": "Glass",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of Memory and Language",
"volume": "26",
"issue": "6",
"pages": "714--738",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Warner and Arnold L. Glass. 1987. Context and distance-to-disambiguation effects in ambi- guity resolution: Evidence from grammaticality judgments of garden path sentences. Journal of Memory and Language, 26(6):714 -738.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE Inter- national Conference on Computer Vision (ICCV), pages 19-27. Washington, DC, USA.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Effects of linguistic context on the acceptability of co-speech gestures",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Zlogar",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Davidson",
"suffix": ""
}
],
"year": 2018,
"venue": "Glossa",
"volume": "3",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christina Zlogar and Kathryn Davidson. 2018. Effects of linguistic context on the acceptability of co-speech gestures. Glossa, 3(1):73.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"text": "Language models and their configurations.",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Modeling results. Boldface indicates optimal performance in each row.",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}