{
"paper_id": "Q19-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:12.657564Z"
},
"title": "Tabula Nearly Rasa: Probing the Linguistic Knowledge of Character-level Neural Language Models Trained on Unsegmented Text",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hahn",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recurrent neural networks (RNNs) have reached striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words, and feed them tokenized input during training. We present a multilingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed. These networks face a tougher and more cognitively realistic task, having to discover any useful linguistic unit from scratch based on input statistics. The results show that our ''near tabula rasa'' RNNs are mostly able to solve morphological, syntactic and semantic tasks that intuitively presuppose word-level knowledge, and indeed they learned, to some extent, to track word boundaries. Our study opens the door to speculations about the necessity of an explicit, rigid word lexicon in language learning and usage.",
"pdf_parse": {
"paper_id": "Q19-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Recurrent neural networks (RNNs) have reached striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words, and feed them tokenized input during training. We present a multilingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed. These networks face a tougher and more cognitively realistic task, having to discover any useful linguistic unit from scratch based on input statistics. The results show that our ''near tabula rasa'' RNNs are mostly able to solve morphological, syntactic and semantic tasks that intuitively presuppose word-level knowledge, and indeed they learned, to some extent, to track word boundaries. Our study opens the door to speculations about the necessity of an explicit, rigid word lexicon in language learning and usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recurrent neural networks (RNNs; Elman, 1990) , in particular in their long short term memory variant (LSTMs; Hochreiter and Schmidhuber, 1997) , are widely used in natural language processing. RNNs, often pre-trained on the simple language modeling objective of predicting the next symbol in natural text, are a crucial component of state-of-the-art architectures for machine translation, natural language inference, and text categorization (Goldberg, 2017) .",
"cite_spans": [
{
"start": 33,
"end": 45,
"text": "Elman, 1990)",
"ref_id": "BIBREF21"
},
{
"start": 110,
"end": 143,
"text": "Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF34"
},
{
"start": 442,
"end": 458,
"text": "(Goldberg, 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RNNs are very general devices for sequence processing, hardly assuming any prior linguistic knowledge. Moreover, the simple prediction task they are trained on in language modeling is wellattuned to the core role that prediction plays in cognition (e.g., Bar, 2007; Clark, 2016) . RNNs have thus long attracted researchers interested in language acquisition and processing. Their recent success in large-scale tasks has rekindled this interest (e.g., Frank et al., 2013; Lau et al., 2017; Kirov and Cotterell, 2018; McCoy et al., 2018; Pater, 2018) .",
"cite_spans": [
{
"start": 255,
"end": 265,
"text": "Bar, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 266,
"end": 278,
"text": "Clark, 2016)",
"ref_id": "BIBREF14"
},
{
"start": 451,
"end": 470,
"text": "Frank et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 471,
"end": 488,
"text": "Lau et al., 2017;",
"ref_id": "BIBREF47"
},
{
"start": 489,
"end": 515,
"text": "Kirov and Cotterell, 2018;",
"ref_id": "BIBREF45"
},
{
"start": 516,
"end": 535,
"text": "McCoy et al., 2018;",
"ref_id": "BIBREF53"
},
{
"start": 536,
"end": 548,
"text": "Pater, 2018)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The standard pre-processing pipeline of modern RNNs assumes that the input has been tokenized into word units that are pre-stored in the RNN vocabulary (Goldberg, 2017) . This is a reasonable practical approach, but it makes simulations less interesting from a linguistic point of view. First, discovering words (or other primitive constituents of linguistic structure) is one of the major challenges a learner faces, and by pre-encoding them in the RNN we are facilitating its task in an unnatural way (not even the staunchest nativists would take specific word dictionaries to be part of our genetic code). Second, assuming a unique tokenization into a finite number of discrete word units is in any case problematic. The very notion of what counts as a word in languages with a rich morphology is far from clear (e.g., Dixon and Aikhenvald, 2002; Bickel and Z\u00fa\u00f1iga, 2017) , and, universally, lexical knowledge is probably organized into a not-necessarily-consistent hierarchy of units at different levels: morphemes, words, compounds, constructions, and so forth (e.g., Goldberg, 2005) . Indeed, it has been suggested that the notion of word cannot even be meaningfully defined cross-linguistically (Haspelmath, 2011) .",
"cite_spans": [
{
"start": 152,
"end": 168,
"text": "(Goldberg, 2017)",
"ref_id": "BIBREF29"
},
{
"start": 822,
"end": 849,
"text": "Dixon and Aikhenvald, 2002;",
"ref_id": null
},
{
"start": 850,
"end": 874,
"text": "Bickel and Z\u00fa\u00f1iga, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 1073,
"end": 1088,
"text": "Goldberg, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 1202,
"end": 1220,
"text": "(Haspelmath, 2011)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by these considerations, we study here RNNs that are trained without any notion of word in their input or in their architecture. We train our RNNs as character-level neural language models (CNLMs, Mikolov et al., 2011; Graves, 2014) by removing whitespace from their input, so that, like children learning a language, they don't have access to explicit cues to wordhood. 1 This set-up is almost as tabula rasa as it gets. By using unsegmented orthographic input (and assuming that, in the alphabetic writing systems we work with, there is a reasonable correspondence between letters and phonetic segments), we are only postulating that the learner figured out how to map the continuous speech stream to a sequence of phonological units, an ability children already possess a few months after birth (e.g., Maye et al., 2002; Kuhl, 2004) . We believe that focusing on language modeling of an unsegmented phoneme sequence, abstracting away from other complexities of a fully realistic child language acquisition set-up, is particularly instructive in order to study which linguistic structures naturally emerge.",
"cite_spans": [
{
"start": 199,
"end": 228,
"text": "(CNLMs, Mikolov et al., 2011;",
"ref_id": null
},
{
"start": 229,
"end": 242,
"text": "Graves, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 815,
"end": 833,
"text": "Maye et al., 2002;",
"ref_id": "BIBREF52"
},
{
"start": 834,
"end": 845,
"text": "Kuhl, 2004)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our character-level networks on a bank of linguistic tests in German, Italian, and English. We focus on these languages because of resource availability and ease of benchmark construction. Also, well-studied synthetic languages with a clear, orthographically driven notion of word might be a better starting point to test nonword-centric models, compared with agglutinative or polysynthetic languages, where the very notion of what counts as a word is problematic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our tasks require models to develop the latent ability to parse characters into word-like items associated to morphological, syntactic, and broadly semantic features. The RNNs pass most of the tests, suggesting that they are in some way able to construct and manipulate the right lexical objects. In a final experiment, we look more directly into how the models are handling word-like units. We find, confirming an earlier observation by Kementchedjhieva and Lopez (2018) , that the RNNs specialized some cells to the task of detecting word boundaries (or, more generally, salient linguistic boundaries, in a sense to be further discussed below). Taken together, our results suggest that character-level RNNs capture forms of linguistic knowledge that are traditionally thought to be word-based, without being exposed to an explicit segmentation of their input and, more importantly, without possessing an explicit word lexicon. We will discuss the implications of these findings in the Discussion. 2",
"cite_spans": [
{
"start": 438,
"end": 471,
"text": "Kementchedjhieva and Lopez (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the primacy of words Several linguistic studies suggest that words, at least as delimited by whitespace in some writing systems, are neither necessary nor sufficient units of linguistic analysis. Haspelmath (2011) claims that there is no cross-linguistically valid definition of the notion of word (see also Schiering et al., 2010 , who address specifically the notion of prosodic word). Others have stressed the difficulty of characterizing words in polysynthetic languages (Bickel and Z\u00fa\u00f1iga, 2017) . Children are only rarely exposed to words in isolation during learning (Tomasello, 2003) , 3 and it is likely that the units that adult speakers end up storing in their lexicon are of variable size, both smaller and larger than conventional words (e.g., Jackendoff, 2002; Goldberg, 2005) . From a more applied perspective, Sch\u00fctze (2017) recently defended tokenization-free approaches to NLP, proposing a general non-symbolic approach to text representation.",
"cite_spans": [
{
"start": 199,
"end": 216,
"text": "Haspelmath (2011)",
"ref_id": "BIBREF33"
},
{
"start": 311,
"end": 333,
"text": "Schiering et al., 2010",
"ref_id": "BIBREF65"
},
{
"start": 478,
"end": 503,
"text": "(Bickel and Z\u00fa\u00f1iga, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 577,
"end": 594,
"text": "(Tomasello, 2003)",
"ref_id": "BIBREF72"
},
{
"start": 597,
"end": 598,
"text": "3",
"ref_id": null
},
{
"start": 760,
"end": 777,
"text": "Jackendoff, 2002;",
"ref_id": "BIBREF37"
},
{
"start": 778,
"end": 793,
"text": "Goldberg, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 829,
"end": 843,
"text": "Sch\u00fctze (2017)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We hope our results will contribute to the theoretical debate on word primacy, suggesting, through computational simulations, that word priors are not crucial to language learning and processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Character-based neural language models received attention in the last decade because of their greater generality compared with word-level models. Early studies (Mikolov et al., 2011; Graves, 2014) established that CNLMs might not be as good at language modeling as their word-based counterparts, but lag only slightly behind. This is particularly encouraging in light of the fact that character-level sentence prediction involves a much larger search space than prediction at the word level, as a character-level model must make a prediction after each character, rather than after each word. and Graves (2014) ran qualitative analyses showing that CNLMs capture some basic linguistic properties of their input. The latter, who used LSTM cells, also showed, qualitatively, that CNLMs are sensitive to hierarchical structure. In particular, they balance parentheses correctly when generating text.",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "(Mikolov et al., 2011;",
"ref_id": "BIBREF59"
},
{
"start": 183,
"end": 196,
"text": "Graves, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 597,
"end": 610,
"text": "Graves (2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most recent work in the area has focused on character-aware architectures combining characterand word-level information to develop state-ofthe-art language models that are also effective in morphologically rich languages (e.g., Bojanowski et al., 2016; Kim et al., 2016; Gerz et al., 2018) . For example, Kim and colleagues perform prediction at the word level, but use a character-based convolutional network to generate word representations. Other work focuses on splitting words into morphemes, using character-level RNNs and an explicit segmentation objective (e.g., Kann et al., 2016) . These latter lines of work are only distantly related to our interest in probing what a purely character-level network trained on running text has implicitly learned about linguistic structure. There is also extensive work on segmentation of the linguistic signal that does not rely on neural methods, and is not directly relevant here, (e.g., Brent and Cartwright, 1996; Goldwater et al., 2009; Kamper et al., 2016 , and references therein).",
"cite_spans": [
{
"start": 228,
"end": 252,
"text": "Bojanowski et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 253,
"end": 270,
"text": "Kim et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 271,
"end": 289,
"text": "Gerz et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 571,
"end": 589,
"text": "Kann et al., 2016)",
"ref_id": "BIBREF41"
},
{
"start": 936,
"end": 963,
"text": "Brent and Cartwright, 1996;",
"ref_id": "BIBREF7"
},
{
"start": 964,
"end": 987,
"text": "Goldwater et al., 2009;",
"ref_id": "BIBREF30"
},
{
"start": 988,
"end": 1007,
"text": "Kamper et al., 2016",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Probing linguistic knowledge of neural language models is currently a popular research topic (Li et al., 2016; Linzen et al., 2016; Shi et al., 2016; Adi et al., 2017; K\u00e0d\u00e0r et al., 2017; Hupkes et al., 2018; Conneau et al., 2018; Ettinger et al., 2018; . Among studies focusing on character-level models, Elman (1990) already reported a proofof-concept experiment on implicit learning of word segmentation. Christiansen et al. (1998) trained a RNN on phoneme-level language modeling of transcribed child-directed speech with tokens marking utterance boundaries, and found that the network learned to segment the input by predicting the utterance boundary symbol also at word edges. More recently, Sennrich (2017) explored the grammatical properties of characterand subword-unit-level models that are used as components of a machine translation system. He concluded that current character-based decoders generalize better to unseen words, but capture less grammatical knowledge than subword units. Still, his character-based systems lagged only marginally behind the subword architectures on grammatical tasks such as handling agreement and negation. Radford et al. (2017) focused on CNLMs deployed in the domain of sentiment analysis, where they found the network to specialize a unit for sentiment tracking. We will discuss below how our CNLMs also show single-unit specialization, but for boundary tracking. Godin et al. (2018) investigated the rules implicitly used by supervised character-aware neural morphological segmentation methods, finding linguistically sensible patterns. probed the linguistic knowledge induced by a neural network that receives unsegmented acoustic input. Focusing on phonology, they found that the lower layers of the model process finer-grained information, whereas higher layers are sensitive to more abstract patterns. Kementchedjhieva and Lopez (2018) recently probed the linguistic knowledge of an English CNLM trained with whitespace in the input. Their results are aligned with ours. The model is sensitive to lexical and morphological structure, and it captures morphosyntactic categories as well as constraints on possible morpheme combinations. Intriguingly, the model tracks word/morpheme boundaries through a single specialized unit, suggesting that such boundaries are salient (at least when marked by whitespace, as in their experiments) and informative enough that it is worthwhile for the network to devote a special mechanism to process them. We replicated this finding for our networks trained on whitespacefree text, as discussed in Section 4.4, where we discuss it in the context of our other results.",
"cite_spans": [
{
"start": 93,
"end": 110,
"text": "(Li et al., 2016;",
"ref_id": "BIBREF49"
},
{
"start": 111,
"end": 131,
"text": "Linzen et al., 2016;",
"ref_id": "BIBREF51"
},
{
"start": 132,
"end": 149,
"text": "Shi et al., 2016;",
"ref_id": "BIBREF69"
},
{
"start": 150,
"end": 167,
"text": "Adi et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 168,
"end": 187,
"text": "K\u00e0d\u00e0r et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 188,
"end": 208,
"text": "Hupkes et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 209,
"end": 230,
"text": "Conneau et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 231,
"end": 253,
"text": "Ettinger et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 306,
"end": 318,
"text": "Elman (1990)",
"ref_id": "BIBREF21"
},
{
"start": 408,
"end": 434,
"text": "Christiansen et al. (1998)",
"ref_id": "BIBREF13"
},
{
"start": 698,
"end": 713,
"text": "Sennrich (2017)",
"ref_id": "BIBREF68"
},
{
"start": 1151,
"end": 1172,
"text": "Radford et al. (2017)",
"ref_id": "BIBREF62"
},
{
"start": 1411,
"end": 1430,
"text": "Godin et al. (2018)",
"ref_id": "BIBREF27"
},
{
"start": 1854,
"end": 1887,
"text": "Kementchedjhieva and Lopez (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We extracted plain text from full English, German, and Italian Wikipedia dumps with WikiExtractor. 4 We randomly selected test and validation sections consisting of 50,000 paragraphs each, and used the remainder for training. The training sets contained 16M (German), 9M (Italian), and 41M (English) paragraphs, corresponding to 819M, 463M, and 2,333M words, respectively. Paragraph order was shuffled for training, without attempting to split by sentences. All characters were lower-cased. For benchmark construction and word-based model training, we tokenized and tagged the corpora with TreeTagger (Schmid, 1999) . 5 We used as vocabularies the most frequent characters from each corpus, setting thresholds so as to ensure that all characters representing phonemes were included, resulting in vocabularies of sizes 60 (English), 73 (German), and 59 (Italian). We further constructed word-level neural language models (WordNLMs); their vocabulary included the most frequent 50,000 words per corpus.",
"cite_spans": [
{
"start": 99,
"end": 100,
"text": "4",
"ref_id": null
},
{
"start": 601,
"end": 615,
"text": "(Schmid, 1999)",
"ref_id": "BIBREF66"
},
{
"start": 618,
"end": 619,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "3"
},
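The following is a minimal sketch of the preprocessing just described (lower-casing, whitespace removal, frequency-thresholded character vocabulary). The function names, the `min_count` threshold, and the toy corpus are illustrative assumptions, not the authors' actual code or settings.

```python
from collections import Counter

def build_char_vocab(paragraphs, min_count=100):
    """Count characters in lower-cased, whitespace-free text and keep frequent ones."""
    counts = Counter()
    for p in paragraphs:
        # Lower-case and strip all whitespace, mirroring the setup described above.
        counts.update("".join(p.lower().split()))
    # Keep characters above the frequency threshold; id 0 is reserved for unknowns.
    chars = [c for c, n in counts.most_common() if n >= min_count]
    return {c: i + 1 for i, c in enumerate(chars)}

def encode(paragraph, vocab):
    """Map a paragraph to a list of character ids (0 = unknown character)."""
    text = "".join(paragraph.lower().split())
    return [vocab.get(c, 0) for c in text]

if __name__ == "__main__":
    corpus = ["Der sehr rote Baum steht im Garten.", "La casa rossa e molto bella."]
    vocab = build_char_vocab(corpus, min_count=1)
    print(encode(corpus[0], vocab))
```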
{
"text": "We trained RNN and LSTM CNLMs; we will refer to them simply as RNN and LSTM, respectively. The ''vanilla'' RNN will serve as a baseline to ascertain if/when the longer-range information-tracking abilities afforded to the LSTM by its gating mechanisms are necessary. Our WordNLMs are always LSTMs. For each model/language, we applied random hyperparameter search. We terminated training after 72 hours. 6 None of the models had overfitted, as measured by performance on the validation set. 7 Language modeling performance on the test partitions is shown in Table 1 . Recall that we removed whitespace, which is both easy to predict, and aids prediction of other characters. Consequently, the fact that our character-level models are below the state of the art is expected. 8 For example, the best model of Merity et al. (2018) 2018report 51.9 and 44.9 perplexities, respectively, in English and Italian for their best LSTMs trained on Wikipedia data with same vocabulary size as ours.",
"cite_spans": [
{
"start": 489,
"end": 490,
"text": "7",
"ref_id": null
},
{
"start": 805,
"end": 825,
"text": "Merity et al. (2018)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [
{
"start": 556,
"end": 563,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "3"
},
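A minimal PyTorch sketch of a character-level LSTM language model of the kind described above, trained to predict the next character id. The layer sizes, sequence length, optimizer, and learning rate are placeholders for illustration; the authors chose their hyperparameters by random search, and this is not their implementation.

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """LSTM language model over characters: predicts the next character id."""
    def __init__(self, vocab_size, emb=200, hidden=1024, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def train_step(model, optimizer, batch, criterion):
    """One step: inputs are characters 0..n-2, targets are characters 1..n-1."""
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits, _ = model(inputs)
    loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    vocab_size = 60  # e.g., the English character vocabulary size reported above
    model = CharLM(vocab_size)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    crit = nn.CrossEntropyLoss()
    fake_batch = torch.randint(0, vocab_size, (8, 81))  # 8 sequences of 81 characters
    print(train_step(model, opt, fake_batch, crit))
```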
{
"text": "Words belong to part-of-speech categories, such as nouns and verbs. Moreover, they typically carry inflectional features such as number. We start by probing whether CNLMs capture such properties. We use here the popular method of ''diagnostic classifiers'' (Hupkes et al., 2018) . That is, we treat the hidden activations produced by a CNLM whose weights were fixed after language model training as input features for a shallow (logistic) classifier of the property of interest (e.g., plural vs. singular). If the classifier is successful, this means that the representations provided by the model are encoding the relevant information. The classifier is deliberately shallow and trained on a small set of examples, as we want to test whether the properties of interest are robustly encoded in the representations produced by the CNLMs, and amenable to a simple linear readout (Fusi et al., 2016) . In our case, we want to probe word-level properties in models trained at the character level.",
"cite_spans": [
{
"start": 257,
"end": 278,
"text": "(Hupkes et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 877,
"end": 896,
"text": "(Fusi et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discovering morphological categories",
"sec_num": "4.1"
},
{
"text": "To do this, we let the model read each target word character-by-character, and we treat the state of its hidden layer after processing the last character in the word as the model's implicit representation of the word, on which we train the diagnostic classifier. The experiments focus on German and Italian, as it is harder to design reliable test sets for the impoverished English morphological system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discovering morphological categories",
"sec_num": "4.1"
},
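A sketch of the diagnostic-classifier probe described above, assuming a trained `CharLM` and character vocabulary like those in the earlier sketches: each target word is read character-by-character, the hidden state after its last character serves as the word representation, and a shallow logistic regression is fit on a small training sample. Word lists, split handling, and names are illustrative.

```python
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def word_representation(model, vocab, word):
    """Feed a word character-by-character; return the top-layer state after its last character."""
    ids = torch.tensor([[vocab.get(c, 0) for c in word]])
    _, (h, _) = model.lstm(model.embed(ids))
    return h[-1, 0].numpy()  # last layer, single batch element

def probe(model, vocab, nouns, verbs, n_train=10):
    """Train a shallow noun/verb classifier on a few examples and test on the rest.
    (The paper averages over many random splits; a single split is shown here.)"""
    X = [word_representation(model, vocab, w) for w in nouns + verbs]
    y = [0] * len(nouns) + [1] * len(verbs)
    train_idx = list(range(n_train)) + list(range(len(nouns), len(nouns) + n_train))
    test_idx = [i for i in range(len(y)) if i not in train_idx]
    clf = LogisticRegression(max_iter=1000)
    clf.fit([X[i] for i in train_idx], [y[i] for i in train_idx])
    return clf.score([X[i] for i in test_idx], [y[i] for i in test_idx])
```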
{
"text": "Word classes (nouns vs. verbs) For both German and Italian, we sampled 500 verbs and 500 nouns from the Wikipedia training sets, requiring that they are unambiguously tagged in the corpus by TreeTagger. Verbal and nominal forms are often cued by suffixes. We removed this confound by selecting examples with the same ending across the two categories (-en in German: Westen 'west', 9 stehen 'to stand'; and -re in Italian: autore 'author', dire 'to say'). We randomly selected 20 training examples (10 nouns and 10 verbs), and tested on the remaining items. We repeated the experiment 100 times to account for random train-test split variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discovering morphological categories",
"sec_num": "4.1"
},
{
"text": "Although we controlled for suffixes as described above, it could still be the case that other substrings reliably cue verbs or nouns. We thus considered a baseline trained on word-internal information only, namely, a character-level LSTM autoencoder trained on the Wikipedia datasets to reconstruct words in isolation. 10 The hidden state of the LSTM autoencoder should capture discriminating orthographic features, but, by design, will have no access to broader contexts. We further considered word embeddings from the output layer of the WordNLM. Unlike CNLMs, the WordNLM cannot make educated guesses about words that are not in its training vocabulary. These out of vocabulary (OOV) words are by construction less frequent, and thus likely to be in general more difficult. To get a sense of both ''bestcase scenario'' and more realistic WordNLM performance, we report its accuracy both excluding and including OOV items (WordNLM subs. and WordNLM in Table 2 , respectively). In the latter case, we let the model make a random guess for OOV items. The percentage of OOV items over the entire dataset, balanced for nouns and verbs, was 92.3% for German and 69.4% for Italian. Note that none of the words were OOV 9 German nouns are capitalized; this cue is unavailable to the CNLM as we lower-case the input.",
"cite_spans": [
{
"start": 319,
"end": 321,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 954,
"end": 961,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discovering morphological categories",
"sec_num": "4.1"
},
{
"text": "10 The autoencoder is implemented as a standard LSTM sequence-to-sequence model (Sutskever et al., 2014) . For each language, autoencoder hyperparameters were chosen using random search, as for the language models; details are in supplementary material to be made available upon publication. For both German and Italian models, the following parameters were chosen: 2 layers, 100 embedding dimensions, 1024 hidden dimensions. for the CNLM, as they all were taken from the Wikipedia training set. Results are in Table 2 . All language models outperform the autoencoders, showing that they learned categories based on broader distributional evidence, not just typical strings cuing nouns and verbs. Moreover, the LSTM CNLM outperforms the RNN, probably because it can track broader contexts. Not surprisingly, the word-based model fares better on in-vocabulary words, but the gap, especially in Italian, is rather narrow, and there is a strong negative impact of OOV words (as expected, given that WordNLM is at random on them).",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [
{
"start": 511,
"end": 518,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discovering morphological categories",
"sec_num": "4.1"
},
{
"text": "We turn next to number, a more granular morphological feature. We study German, as it possesses a rich system of nominal classes forming plural through different morphological processes. We train a diagnostic number classifier on a subset of these classes, and test on the others, in order to probe the abstract number generalization capabilities of the tested models. If a model generalizes correctly, it means that the CNLM is sensitive to number as an abstract feature, independently of its surface expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number",
"sec_num": null
},
{
"text": "We extracted plural nouns from the Wiktionary and the German UD treebank (McDonald et al., 2013; Brants et al., 2002) . We selected nouns with plurals in -n, -s, or -e to train the classifier (e.g., Geschichte(n) 'story(-ies)', Radio(s) 'radio(s)', Pferd(e) 'horse(s)', respectively). We tested on plurals formed with -r (e.g., Lieder for singular Lied 'song'), or through vowel change (Umlaut, e.g.,\u00c4pfel from singular Apfel 'apple'). Certain nouns form plurals through concurrent suffixing and Umlaut. We grouped these together with nouns using the same suffix, reserving the Umlaut group for nouns only undergoing vowel change train classes test classes -n/-s/-e -r Umlaut Random 50.0 50.0 50.0 Autoencoder 61.4 (\u00b1 0.9) 50.7 (\u00b1 0.8) 51.9 (\u00b1 0.4) LSTM 71.5 (\u00b1 0.8) 78.8 (\u00b1 0.6) 60.8 (\u00b1 0.6) RNN 65.4 (\u00b1 0.9) 59.8 (\u00b1 1.0) 56.7 (\u00b1 0.7) WordNLM 77.3 (\u00b1 0.7) 77.1 (\u00b1 0.5) 74.2 (\u00b1 0.6) WordNLM subs. 97.1 (\u00b1 0.3) 90.7 (\u00b1 0.1) 97.5 (\u00b1 0.1) (e.g., Saft/S\u00e4fte 'juice(s)' would be an instance of -e suffixation). The diagnostic classifier was trained on 15 singulars and plurals randomly selected from each training class. As plural suffixes make words longer, we sampled singulars and plurals from a single distribution over lengths, to ensure that their lengths were approximately matched. Moreover, because in uncontrolled samples from our training classes a final -e-vowel would constitute a strong surface cue to plurality, we balanced the distribution of this property across singulars and plurals in the samples. For the test set, we selected all plurals in -r (127) or Umlaut (38), with their respective singulars. We also used all remaining plurals ending in -n (1,467), -s (98), and -e (832) as in-domain test data. To control for the impact of training sample selection, we report accuracies averaged over 200 random train-test splits and standard errors over these splits. For WordNLM OOV, there were 45.0% OOVs in the training classes, 49.1% among the -r forms, and 52.1% for Umlaut.",
"cite_spans": [
{
"start": 73,
"end": 96,
"text": "(McDonald et al., 2013;",
"ref_id": "BIBREF54"
},
{
"start": 97,
"end": 117,
"text": "Brants et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Number",
"sec_num": null
},
{
"text": "Results are in Table 3 . The classifier based on word embeddings is the most successful. It outperforms in most cases the best CNLM even in the more cogent OOV-inclusive evaluation. This confirms the common observation that word embeddings reliably encode number (Mikolov et al., 2013b) . Again, the LSTM-based CNLM is better than the RNN, but both significantly outperform the autoencoder. The latter is nearrandom on new class prediction, confirming that we properly controlled for orthographic confounds.",
"cite_spans": [
{
"start": 263,
"end": 286,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Number",
"sec_num": null
},
{
"text": "We observe a considerable drop in the LSTM CNLM performance between generalization to -r and Umlaut. On the one hand, the fact that performance is still clearly above chance (and autoencoder) in the latter condition shows that the LSTM CNLM has a somewhat abstract notion of number not tied to specific orthographic exponents. On the other, the -r vs. Umlaut difference suggests that the generalization is not completely abstract, as it works more reliably when the target is a new suffixation pattern, albeit one that is distinct from those seen in training, than when it is a purely non-concatenative process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number",
"sec_num": null
},
{
"text": "Words encapsulate linguistic information into units that are then put into relation by syntactic rules. A long tradition in linguistics has even claimed that syntax is blind to sub-word-level processes (e.g., Chomsky, 1970; Di Sciullo and Williams, 1987; Bresnan and Mchombo, 1995; Williams, 2007) . Can our CNLMs, despite the lack of an explicit word lexicon, capture relational syntactic phenomena, such as agreement and case assignment? We investigate this by testing them on syntactic dependencies between non-adjacent words. We adopt the ''grammaticality judgment'' paradigm of Linzen et al. (2016) . We create minimal sets of grammatical and ungrammatical phrases illustrating the phenomenon of interest, and let the language model assign a likelihood to all items in the set. The language model is said to ''prefer'' the grammatical variant if it assigns a higher likelihood to it than to its ungrammatical counterparts. We must stress two methodological points. First, because a character-level language model assigns a probability to each character of a phrase, and the phrase likelihood is the product of these values (all between 0 and 1), minimal sets must be controlled for character length. This makes existing benchmarks unusable. Second, the ''distance'' of a relation is defined differently for a character-level model, and it is not straightforward to quantify. Consider the German phrase in Example (1) below. For a word model, two items separate the article from the noun. For a (space-less) character model, eight characters intervene until the noun onset, but the span to consider will typically be longer. For example, Baum could be the beginning of the feminine noun Baumwolle 'cotton', which would change the agreement requirements on the article. So, until the model finds evidence that it fully parsed the head noun, it cannot reliably check agreement. This will typically require parsing at least the full noun and the first character following it. We again focus on German and Italian, as their richer inflectional morphology simplifies the task of constructing balanced minimal sets.",
"cite_spans": [
{
"start": 209,
"end": 223,
"text": "Chomsky, 1970;",
"ref_id": "BIBREF11"
},
{
"start": 224,
"end": 254,
"text": "Di Sciullo and Williams, 1987;",
"ref_id": "BIBREF18"
},
{
"start": 255,
"end": 281,
"text": "Bresnan and Mchombo, 1995;",
"ref_id": "BIBREF9"
},
{
"start": 282,
"end": 297,
"text": "Williams, 2007)",
"ref_id": "BIBREF73"
},
{
"start": 583,
"end": 603,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Capturing syntactic dependencies",
"sec_num": "4.2"
},
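A sketch of the likelihood comparison used in the grammaticality tests, assuming a trained `CharLM` and vocabulary as in the earlier sketches: each length-matched variant is lower-cased, stripped of whitespace, scored by summing per-character log-probabilities, and the highest-scoring variant counts as the model's preference. The example phrases (with surrounding full stops, as in the stimuli described in this section) are illustrative, and the first character is simply conditioned on rather than scored.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def log_likelihood(model, vocab, phrase):
    """Sum of log P(char_t | preceding chars) over a whitespace-free, lower-cased phrase.
    The first character is used only as context (a simplification of full scoring)."""
    text = "".join(phrase.lower().split())
    ids = torch.tensor([[vocab.get(c, 0) for c in text]])
    logits, _ = model(ids[:, :-1])
    logp = F.log_softmax(logits, dim=-1)
    targets = ids[:, 1:]
    return logp.gather(-1, targets.unsqueeze(-1)).sum().item()

def preferred_variant(model, vocab, variants):
    """Return the variant (e.g., der/die/das + phrase) with the highest likelihood."""
    return max(variants, key=lambda v: log_likelihood(model, vocab, v))

# e.g. preferred_variant(model, vocab,
#     [".der sehr rote baum.", ".die sehr rote baum.", ".das sehr rote baum."])
```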
{
"text": "Article-noun gender agreement Each German noun belongs to one of three genders (masculine, feminine, neuter), morphologically marked on the article. As the article and the noun can be separated by adjectives and adverbs, we can probe knowledge of lexical gender together with longdistance agreement. We create stimuli of the form 1{der, die, das} the sehr very rote red",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Baum tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "where the correct nominative singular article (der, in this case) matches the gender of the noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "We then run the CNLM on the three versions of this phrase (removing whitespace) and record the probabilities it assigns to them. If the model assigns the highest probability to the version with the right article, we count it as a hit for the model. To avoid phrase segmentation ambiguities (as in the Baum/Baumwolle example above), we present phrases surrounded by full stops. To build the test set, we select all 4,581 nominative singular nouns from the German UD treebank: 49.3% feminine, 26.4% masculine, and 24.3% neuter. WordNLM OOV noun ratios are: 40.0% for masculine, 36.2% for feminine, and 41.5% for neuter. We construct four conditions varying the number of adverbs and adjectives between article and noun. We first consider stimuli where no material intervenes. In the second condition, an adjective with the correct case ending, randomly selected from the training corpus, is added. Crucially, the ending of the adjective does not reveal the gender of the noun. We only used adjectives occurring at least 100 times, and not ending in -r. 11 We obtained a pool of 9,742 adjectives to sample from, also used in subsequent experiments. A total of 74.9% of these were OOV for the WordNLM. In the third and fourth conditions, one (sehr) or two adverbs (sehr extrem) intervene between article and adjective. These do not cue gender either. We obtained 2,290 (m.), 2,261 (f.), and 1,111 (n.) stimuli, respectively. To control for surface co-occurrence statistics in the input, we constructed an n-gram baseline picking the article most frequently occurring before the phrase in the training data, breaking ties randomly. OOVs were excluded from WordNLM evaluation, resulting in an easier test for this rival model. However, here and in the next two tasks, CNLM performance on this reduced set was only slightly better, and we do not report it. We report accuracy averaged over nouns belonging to each of the three genders. By design, the random baseline accuracy is 33%.",
"cite_spans": [
{
"start": 1051,
"end": 1053,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Results are presented in Figure 1 (left) . WordNLM performs best, followed by the LSTM CNLM. The n-gram baseline performs similarly to the CNLM when there is no intervening material, which is expected, as a noun will often be preceded by its article in the corpus. However, its accuracy drops to chance level (0.33) in the presence of an adjective, whereas the CNLM is still able to track agreement. The RNN variant is much worse. It is outperformed by the n-gram model in the adjacent condition, and it drops to random accuracy as more material intervenes. We emphasized at the outset of this section that CNLMs must track agreement across much wider spans than word-based models. The LSTM variant ability to preserve information for longer might play a crucial role here.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 40,
"text": "Figure 1 (left)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Article-noun case agreement We selected the two determiners dem and des, which unambiguously indicate dative and genitive case, respectively, for masculine and neuter nouns:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "(2) a. {dem, des} sehr roten Baum the very red tree (dative) b. {dem, des} sehr roten Baums the very red tree (genitive)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "We selected all noun lemmas of the appropriate genders from the German UD treebank, and extracted morphological paradigms from Wiktionary to obtain case-marked forms, retaining only nouns unambiguously marking the two cases (4,509 nouns). We created four conditions, varying the amount of intervening material, as in the gender agreement experiment (4,509 stimuli per condition). For 81.3% of the nouns, at least one of the two forms was OOV for the WordNLM, and we tested the latter on the full-coverage subset. Random baseline accuracy is 50%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Results are in Figure 1 (center) . Again, WordNLM has the best performance, but the LSTM CNLM is competitive as more elements intervene. Accuracy stays well above 80% even with three intervening words. The n-gram model performs well if there is no intervening material (again reflecting the obvious fact that article-noun sequences are frequent in the corpus), and at chance otherwise. The RNN CNLM accuracy is above chance with one and two intervening elements, but drops considerably with distance.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 32,
"text": "Figure 1 (center)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Prepositional case subcategorization German verbs and prepositions lexically specify their object's case. We study the preposition mit 'with', which selects a dative object. We focus on mit, as it unambiguously requires a dative object, and it is extremely frequent in the Wikipedia corpus we are using. To build the test set, we select objects whose head noun is a nominalized adjective, with regular, overtly marked case inflection. We use the same adjective pool as in the preceding experiments. We then select all sentences containing a mit prepositional phrase in the German Universal Dependencies treebank, subject to the constraints that (1) the head of the noun phrase governed by the preposition is not a pronoun (replacing such items with a nominal object often results in ungrammaticality), and (2) the governed noun phrase is continuous, in the sense that it is not interrupted by words that do not belong to it. 12 We obtained 1,629 such sentences. For each sentence, we remove the prepositional phrase and replace it by a phrase of the form 3mit with der the sehr very {rote, roten} red one where only the -en (dative) version of the adjective is compatible with the case requirement of the preposition (and the intervening material does not disambiguate case). We construct three conditions by varying the presence and number of adverbs (sehr 'very', sehr extrem 'very extremely', sehr extrem unglaublich 'very extremely incredibly').",
"cite_spans": [
{
"start": 925,
"end": 927,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Note that here the correct form is longer than the wrong one. As the overall likelihood is the product of character probabilities ranging between 0 and 1, if this introduces a length bias, the latter will work against the character models. Note also that we embed test phrases into full sentences (e.g., Die Figur hat mit der roten gespielt und meistens gewonnen. 'The figure played with the red one and mostly won'). We do this because this will disambiguate the final element of the phrase as a noun (not an adjective), and exclude the reading in which mit is a particle not governing the noun phrase of interest (Dudenredaktion, 2019). 13 When running the WordNLM, we excluded OOV adjectives as in the previous experiments, but did not apply further OOV filtering to the sentence frames. For the n-gram baseline, we only counted occurrences of the prepositional phrase, omitting the sentential contexts. Random baseline accuracy is 50%. We also created control stimuli where all words up to and including the preposition are removed (the example sentence above becomes: der roten gespielt und meistens gewonnen). If a model's accuracy is lower on these control stimuli than on the full ones, its performance cannot be simply explained by the different unigram probabilities of the two adjective forms.",
"cite_spans": [
{
"start": 639,
"end": 641,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Results are shown in Figure 1 (right) . Only the n-gram baseline fails to outperform control accuracy (dotted) . Surprisingly, the LSTM CNLM slightly outperforms the WordNLM, even though the latter is evaluated on the easier full-lexicalcoverage stimulus subset. Neither model shows accuracy decay as the number of adverbs increases. As before, the n-gram model drops to chance as adverbs intervene, whereas the RNN CNLM starts with low accuracy that progressively decays below chance.",
"cite_spans": [
{
"start": 102,
"end": 110,
"text": "(dotted)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 21,
"end": 37,
"text": "Figure 1 (right)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "German",
"sec_num": "4.2.1"
},
{
"text": "Article-noun gender agreement Similar to German, Italian articles agree with the noun in gender; however, Italian has a relatively extended paradigm of masculine and feminine nouns differing only in the final vowel (-o and -a, respectively), allowing us to test agreement in fully controlled paradigms such as the following: The intervening adjective, ending in -e, does not cue gender. We constructed the stimuli with words appearing at least 100 times in the training corpus. We required moreover the -a and -o forms of a noun to be reasonably balanced in frequency (neither form is more than twice as frequent as the other), or both rather frequent (appear at least 500 times). As the prenominal adjectives are somewhat marked, we only considered -e adjectives that occur prenominally with at least 10 distinct nouns in the training corpus. Here and below, stimuli were manually checked, removing nonsensical adjective-noun (below, adverb-adjective) combinations. Finally, adjective-noun combinations that occurred in the training corpus were excluded, so that an ngram baseline would perform at chance level. We obtained 15,005 stimulus pairs in total. 35.8% of them contained an adjective or noun that was OOV for the WordNLM. Again, we report this Table 4 . WordNLM shows the strongest performance, closely followed by the LSTM CNLM. The RNN CNLM performs strongly above chance (50%), but again lags behind the LSTM.",
"cite_spans": [],
"ref_spans": [
{
"start": 1254,
"end": 1261,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Italian",
"sec_num": "4.2.2"
},
{
"text": "Article-adjective gender agreement We next consider agreement between articles and adjectives with an intervening adverb: 5a. il meno {alieno, aliena} the (m.) less alien one b. la meno {alieno, aliena} the (f.) less alien one where we used the adverbs pi\u00f9 'more', meno 'less', tanto 'so much'. We considered only adjectives that occurred 1K times in the training corpus (as adjectives ending in -a/-o are very common). We excluded all cases in which the adverbadjective combination occurred in the training corpus, obtaining 88 stimulus pairs. Because of the restriction to common adjectives, there were no WordNLM OOVs. Results are shown on the second line of Table 4 ; all three models perform almost perfectly. Possibly, the task is made easy by the use of extremely common adverbs and adjectives.",
"cite_spans": [],
"ref_spans": [
{
"start": 662,
"end": 669,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Italian",
"sec_num": "4.2.2"
},
{
"text": "Article-adjective number agreement Finally, we constructed a version of the last test that probed number agreement. For feminine forms, it is possible to compare same-length phrases such as: Stimulus selection was as in the last experiment, but we used a 500-occurrences threshold for adjectives, as feminine plurals are less common, obtaining 99 pairs. Again, no adverb-adjective combination was attested. There were no OOV items for the WordNLM. Results are shown on the third line of Table 4 ; the LSTMs perform almost perfectly, and the RNN is strongly above chance.",
"cite_spans": [],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Italian",
"sec_num": "4.2.2"
},
{
"text": "We probe whether CNLMs are capable of tracking the shallow form of word-level semantics required in a fill-the-gap test. We turn now to English, as for this language we can use the Microsoft Research Sentence Completion task (Zweig and Burges, 2011) . The challenge consists of sentences with a gap, with 5 possible choices to fill it. Language models can be directly applied to the task, by calculating the likelihood of sentence variants with all possible completions, and selecting the one with the highest likelihood.",
"cite_spans": [
{
"start": 225,
"end": 249,
"text": "(Zweig and Burges, 2011)",
"ref_id": "BIBREF76"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics-driven sentence completion",
"sec_num": "4.3"
},
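A sketch of applying the same character-level scoring to the sentence-completion task: each of the five candidates is substituted into the gap, the filled-in sentence is scored without whitespace, and the highest-scoring candidate is selected. It reuses `log_likelihood` from the earlier sketch; the gap marker and the commented example are illustrative.

```python
def complete(model, vocab, sentence_with_gap, candidates, gap_token="___"):
    """Pick the candidate whose filled-in sentence gets the highest CNLM likelihood."""
    scores = {c: log_likelihood(model, vocab, sentence_with_gap.replace(gap_token, c))
              for c in candidates}
    return max(scores, key=scores.get)

# complete(model, vocab, "was she his ___ , his friend, or his mistress?",
#          ["client", "musings", "discomfiture", "choice", "opportunity"])
```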
{
"text": "The creators of the benchmark took multiple precautions to ensure that success on the task implies some command of semantics. The multiple choices were controlled for frequency, and the annotators were encouraged to choose confounders whose elimination required ''semantic knowledge and logical inference'' (Zweig and Burges, 2011) . For example, the right choice in ''Was she his [client|musings|discomfiture|choice|opportunity], his friend, or his mistress? depends on the cue that the missing word is coordinated with friend and mistress, and the latter are animate entities.",
"cite_spans": [
{
"start": 307,
"end": 331,
"text": "(Zweig and Burges, 2011)",
"ref_id": "BIBREF76"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics-driven sentence completion",
"sec_num": "4.3"
},
{
"text": "The task domain (Sherlock Holmes novels) is very different from the Wikipedia dataset on which we originally trained our models. For a fairer comparison with previous work, we retrained our models on the corpus provided with the benchmark, consisting of 41 million words from 19th century English novels (we removed whitespace from this corpus as well).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics-driven sentence completion",
"sec_num": "4.3"
},
{
"text": "Results are in Table 5 . We confirm the importance of in-domain training, as the models trained on Wikipedia perform poorly (but still above chance level, which is at 20%). With indomain training, the LSTM CNLM outperforms many earlier word-level neural models, and is only slightly below our WordNLM. The RNN is not successful even when trained in-domain, (Mikolov, 2012) , Word RNN (Zweig et al., 2012) , Word LSTM and LdTreeLSTM (Zhang et al., 2016) . We further report models incorporating distributional encodings of semantics (right): Skipgram(+RNNs) from Mikolov et al. (2013a) , the PMI-based model of Woods (2016) , and the Context-Embedding-based approach of Melamud et al. (2016) .",
"cite_spans": [
{
"start": 357,
"end": 372,
"text": "(Mikolov, 2012)",
"ref_id": "BIBREF57"
},
{
"start": 384,
"end": 404,
"text": "(Zweig et al., 2012)",
"ref_id": "BIBREF77"
},
{
"start": 432,
"end": 452,
"text": "(Zhang et al., 2016)",
"ref_id": "BIBREF75"
},
{
"start": 562,
"end": 584,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF58"
},
{
"start": 610,
"end": 622,
"text": "Woods (2016)",
"ref_id": "BIBREF74"
},
{
"start": 669,
"end": 690,
"text": "Melamud et al. (2016)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Semantics-driven sentence completion",
"sec_num": "4.3"
},
{
"text": "contrasting with the word-based vanilla RNN from the literature, whose performance, while still below LSTMs, is much stronger. Once more, this suggests that capturing word-level generalizations with a word-lexicon-less character model requires the long-span processing abilities of an LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics-driven sentence completion",
"sec_num": "4.3"
},
{
"text": "The good performance of CNLMs on most tasks above suggests that, although they lack a hardcoded word vocabulary and they were trained on unsegmented input, there is enough pressure from the language modeling task for them to learn to track word-like items, and associate them with various morphological, syntactic, and semantic properties. In this section, we take a direct look at how CNLMs might be segmenting their input. Kementchedjhieva and Lopez (2018) found a single unit in their English CNLM that seems, qualitatively, to be tracking morpheme/word boundaries. Because they trained the model with whitespace, the main function of this unit could simply be to predict the very frequent whitespace character. We conjecture instead (like them) that the ability to segment the input into meaningful items is so important when processing language that CNLMs will specialize units for boundary tracking even when trained without whitespace. To look for ''boundary units,'' we created a random set of 10,000 positions from the training set, balanced between those corresponding to a word-final character and those occurring wordinitially or word-medially. We then computed, for each hidden unit, the Pearson correlation between its activations and a binary variable that takes value 1 in word-final position and 0 elsewhere. For each language and model (LSTM or RNN), we found very few units with a high correlation score, suggesting that the models have indeed specialized units for boundary tracking. We further study the units with the highest correlations, which are, for the LSTMs, 0.58 (English), 0.69 (German), and 0.57 (Italian). For the RNNs, the highest correlations are 0.40 (English), and 0.46 (German and Italian). 14 Examples We looked at the behavior of the selected LSTM units qualitatively by extracting random sets of 40-character strings from the development partition of each language (leftaligned with word onsets) and plotting the corresponding boundary unit activations. Figure 2 reports illustrative examples. In all languages, most peaks in activation mark word boundaries. However, other interesting patterns emerge. In English, we see how the unit reasonably treats co-and produced in co-produced as separate elements, and it also posits a weaker boundary after the prefix pro-. As it proceeds left-to-right, with no information on what follows, the network posits a boundary after but in Buttrich. In the German example, we observe how the complex word Hauptaufgabe ('main task') is segmented into the morphemes haupt, auf and gabe. Similarly, in the final transformati-fragment, we observe a weak boundary after the prefix trans. In the pronoun deren 'whose', the case suffix -n is separated. In Italian, in seguito a is a lexicalized multi-word sequence meaning 'following' (literally: 'in continuation to'). The boundary unit does not spike inside it. Similarly, the fixed expression Sommo Pontefice (referring to the Pope) does not trigger inner boundary unit activation spikes. On the other hand, we notice peaks after di and mi in dimissioni. Again, in left-to-right processing, the unit has a tendency to immediately posit boundaries when frequent function words are encountered.",
"cite_spans": [
{
"start": 425,
"end": 458,
"text": "Kementchedjhieva and Lopez (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 1995,
"end": 2003,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Boundary tracking in CNLMs",
"sec_num": "4.4"
},
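A sketch of the boundary-unit search described above, assuming a matrix of hidden-unit activations collected while the CNLM reads text (one row per character position) and a parallel 0/1 vector marking word-final positions; it computes the Pearson correlation of each unit with the boundary indicator and returns the most correlated unit. The array shapes and the planted-signal demo are illustrative, not the authors' data.

```python
import numpy as np

def boundary_unit(activations, is_word_final):
    """activations: (n_positions, n_units); is_word_final: (n_positions,) with values in {0, 1}.
    Returns (index of the most correlated unit, its Pearson correlation with the indicator)."""
    a = activations - activations.mean(axis=0)
    b = is_word_final - is_word_final.mean()
    corr = (a * b[:, None]).sum(axis=0) / (
        np.sqrt((a ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum()) + 1e-12)
    best = int(np.argmax(np.abs(corr)))
    return best, float(corr[best])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(10000, 1024))
    labels = rng.integers(0, 2, size=10000).astype(float)
    acts[:, 42] += 2 * labels  # plant a fake "boundary unit" for the demo
    print(boundary_unit(acts, labels))
```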
{
"text": "To gain a more quantitative understanding of how well the boundary unit is tracking word boundaries, we trained a single-parameter diagnostic classifier on the activation of the unit (the classifier simply sets an optimal threshold on the unit activation Table 6 : F1 of single-unit and full-hidden-state word-boundary diagnostic classifiers, trained and tested on uncontrolled running text.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
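A sketch of the single-parameter diagnostic classifier: the only fitted parameter is a threshold on the selected boundary unit's activation. Sweeping candidate thresholds from the training sample and maximizing training F1 is one simple way to set it; this is an assumption for illustration, not necessarily the authors' exact fitting procedure.

```python
import numpy as np
from sklearn.metrics import f1_score

def fit_threshold(train_act, train_labels):
    """Pick the activation threshold that maximizes F1 on the training sample."""
    best_t, best_f1 = None, -1.0
    for t in np.unique(train_act):
        f1 = f1_score(train_labels, (train_act > t).astype(int), zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

def evaluate_f1(threshold, test_act, test_labels):
    """F1 of the single-unit classifier on held-out positions."""
    return f1_score(test_labels, (test_act > threshold).astype(int))
```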
{
"text": "to separate word boundaries from word-internal positions). We ran two experiments. In the first, following standard practice, we trained and tested the classifier on uncontrolled running text. We used 1k characters for training, 1M for testing, both taken from the left-out Wikipedia test partitions. We will report F1 performance on this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
{
"text": "We also considered a more cogent evaluation regime, in which we split training and test data so that the number of boundary and non-boundary conditions are balanced, and there is no overlap between training and test words. Specifically, we randomly selected positions from the test partitions of the Wikipedia corpus, such that half of these were the last character of a token, and the other half were not. We sampled the test data points subject to the constraint that the word (in the case of a boundary position) or word prefix (in the case of a word-internal position) ending at the selected character does not overlap with the training set. This ensures that a classifier cannot succeed by looking for encodings reflecting specific words. For each datapoint, we fed a substring of the 40 preceding characters to the CNLM. We collected 1,000 such points for training, and tested on 1M additional datapoints. In this case, we will report classification accuracy as figure of merit. For reference, in both experiments we also trained diagnostic classifiers on the full hidden layer of the LSTMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
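A sketch of the balanced evaluation-set construction described above, assuming the corpus is available as a list of tokens and that the word strings and prefixes seen in classifier training have been collected in a set: equal numbers of word-final and word-internal positions are sampled, positions whose ending word (or word prefix) overlaps with training items are skipped, and each data point keeps its 40 preceding characters as model input. Names and bookkeeping details are illustrative assumptions.

```python
import random

def balanced_positions(words, n_per_class, context=40, seen_in_train=frozenset()):
    """Sample equal numbers of word-final and word-internal character positions.

    Each sample is (left context of `context` characters ending at the position, label).
    Positions whose ending word (if word-final) or word prefix (if word-internal)
    appears in `seen_in_train` are skipped, so test items do not overlap with training words.
    """
    text, finals = "", []
    for w in words:
        text += w
        finals += [0] * (len(w) - 1) + [1]
    pools = {0: [], 1: []}
    positions = list(range(context - 1, len(text)))
    random.shuffle(positions)
    for i in positions:
        label = finals[i]
        start = i
        while start > 0 and not finals[start - 1]:
            start -= 1  # walk back to the first character of the current word
        if text[start:i + 1] in seen_in_train:
            continue
        if len(pools[label]) < n_per_class:
            pools[label].append((text[i - context + 1:i + 1], label))
        if len(pools[0]) == n_per_class and len(pools[1]) == n_per_class:
            break
    return pools[0] + pools[1]
```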
{
"text": "Looking at the F1 results on uncontrolled running text (Table 6 ), we observe first that the LSTM-based full-hidden-layer classifier has strong performance in all 3 languages, confirming that the LSTM model encodes boundary information. Moreover, in all languages, a large proportion of this performance is already accounted for by the single-parameter classifier using boundary unit activations. This confirms that tracking boundaries is important enough for the network to devote a specialized unit to this task. Full-layer RNN results Results for the balanced classifiers tested on new-word generalization are shown in Table 7 (because of the different nature of the experiments, these are not directly comparable to the F1 results in Table 6 ). Again, we observe a strong performance of the LSTM-based full-hiddenlayer classifier across the board. The LSTM single-parameter classifier using boundary unit activations is also strong, even outperforming the full classifier in German. Moreover, in this more cogent setup, the single-unit LSTM classifier is at least competitive with the full-layer RNN classifier in all languages. The weaker results of RNNs in the word-centric tasks of the previous sections might in part be due to their poorer overall ability to track word boundaries, as specifically suggested by this stricter evaluation setup.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "(Table 6",
"ref_id": null
},
{
"start": 622,
"end": 629,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 738,
"end": 745,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
{
"text": "Error analysis As a final way to characterize the function and behaviour of the boundary units, we inspected the most frequent under-and oversegmentation errors made by the classifier based on the single boundary units, in the more difficult balanced task. We discuss German here, as it is the language where the classifier reaches highest accuracy, and its tendency to have long, morphologically complex words makes it particularly interesting. However, similar patterns were also detected in Italian and, to a lesser extent, English (in the latter, there are fewer and less interpretable common oversegmentations, probably because words are on average shorter and morphology more limited).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
{
"text": "Considering first the 30 most common undersegmentations, the large majority (24 of 30) are Size 128 512 128 256 256 256 128 128 128 Embedding Size 200 100 200 200 50 50 1024 200 200 Dimension 1024 1024 1024 2048 2048 2048 1024 1024 1024 Layers 3 2 2 2 2 2 2 2 common sequences of grammatical terms or very frequent items that can sometimes be reasonably re-analyzed as single function words or adverbs (e.g., bis zu, 'up to' (lit. 'until to'), je nach 'depending on' (lit. 'per after'), bis heute 'to date' (lit. 'until today')). Three cases are multiword city names (Los Angeles). The final 3 cases interestingly involve Bau 'building' followed by von 'of' or genitive determiners der/des. In its eventive reading, this noun requires a patient licensed by either a preposition or the genitive determiner (e.g., Bau der Mauer 'building of the wall' (lit. 'building the-GEN wall')). Apparently the model decided to absorb the case assigner into the form of the noun. We looked next at the 30 most common oversegmentations, that is, at the substrings that were wrongly segmented out of the largest number of distinct words. We limited the analysis to those containing at least 3 characters, because shorter strings were ambiguous and hard to interpret. Among then top oversegmentations, 6 are prefixes that can also occur in isolation as prepositions or verb particles (auf 'on', nach 'after', etc.). Seven are content words that form many compounds (e.g., haupt 'main', occurring in Hauptstadt 'capital', Hauptbahnhof 'main station'; Land 'land', occurring in Deutschland 'Germany', Landkreis 'district'). Another 7 items can be classified as suffixes (e.g., -lich as in s\u00fcdlich 'southern', wissenschaftlich 'scientific'), although their segmentation is not always canonical (e.g., -chaft instead of the expected -schaft in Wissenschaft 'science'). Four very common function words are often wrongly segmented out of longer words (e.g., sie 'she' from sieben 'seven'). The kom and kon cases are interesting, as the model segments them as stems (or stem fragments) in forms of the verbs kommen 'to come' and k\u00f6nnen 'to be able to', respectively (e.g., kommt and konnte), but it also treats them as pseudoaffixes elsewhere (komponist 'composer', kontakt 'contact'). The remaining 3 oversegmentations, rie, run and ter don't have any clear interpretation.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 290,
"text": "Size 128 512 128 256 256 256 128 128 128 Embedding Size 200 100 200 200 50 50 1024 200 200 Dimension 1024 1024 1024 2048 2048 2048 1024 1024 1024 Layers 3 2 2 2 2 2 2 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
{
"text": "To conclude, the boundary unit, even when analyzed through the lens of a classifier that was optimized on word-level segmentation, is actually tracking salient linguistic boundaries at different levels. Although in many cases these boundaries naturally coincide with words (hence the high classifier performance), the CNLM is also sensitive to frequent morphemes and compound elements, as well as to different types of multiword expressions. This is in line with a view of wordhood as a useful but ''soft'', emergent property, rather than a rigid primitive of linguistic processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting word boundaries",
"sec_num": null
},
{
"text": "We probed the linguistic information induced by a character-level LSTM language model trained on unsegmented text. The model was found to possess implicit knowledge about a range of intuitively word-mediated phenomena, such as sensitivity to lexical categories and syntactic and shallow-semantics dependencies. A model initialized with a word vocabulary and fed tokenized input was in general superior, but the performance of the word-less model did not lag much behind, suggesting that word priors are helpful but not strictly required. A character-level RNN was less consistent than the LSTM, suggesting that the latter's ability to track information across longer time spans is important to make the correct generalizations. The character-level models consistently outperformed n-gram controls, confirming they are tapping into more abstract patterns than local co-occurrence statistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "As a first step towards understanding how character-level models handle supra-character phenomena, we searched and found specialized boundary-tracking units in them. These units are not only and not always sensitive to word boundaries, but also respond to other salient items, such as morphemes and multi-word expressions, in accordance with an ''emergent'' and flexible view of the basic constituents of language (Schiering et al., 2010) .",
"cite_spans": [
{
"start": 414,
"end": 438,
"text": "(Schiering et al., 2010)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our results are preliminary in many ways. Our tests are relatively simple. We did not attempt, for example, to model long-distance agreement in presence of distractors, a challenging task even for humans (Gulordava et al., 2018) . The results on number classification in German suggest that the models might not be capturing linguistic generalizations of the correct degree of abstractness, settling for shallower heuristics. Still, as a whole, our work suggests that a large corpus, combined with the weak priors encoded in an LSTM, might suffice to learn generalizations about word-mediated linguistic processes without a hard-coded word lexicon or explicit wordhood cues.",
"cite_spans": [
{
"start": 204,
"end": 228,
"text": "(Gulordava et al., 2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Nearly all contemporary linguistics recognizes a central role to the lexicon (see, e.g., Sag et al., 2003; Goldberg, 2005; Radford, 2006; Bresnan et al., 2016; Je\u017eek, 2016 , for very different perspectives). Linguistic formalisms assume that the lexicon is essentially a dictionary of words, possibly complemented by other units, not unlike the list of words and associated embeddings in a standard word-based NLM. Intriguingly, our CNLMs captured a range of lexical phenomena without anything resembling a word dictionary. Any information a CNLM might acquire about units larger than characters must be stored in its recurrent weights. This suggests a radically different and possibly more neurally plausible view of the lexicon as implicitly encoded in a distributed memory, that we intend to characterize more precisely and test in future work (similar ideas are being explored in a more applied NLP perspective, e.g., Gillick et al., 2016; Lee et al., 2017; Cherry et al., 2018 ).",
"cite_spans": [
{
"start": 89,
"end": 106,
"text": "Sag et al., 2003;",
"ref_id": "BIBREF64"
},
{
"start": 107,
"end": 122,
"text": "Goldberg, 2005;",
"ref_id": "BIBREF28"
},
{
"start": 123,
"end": 137,
"text": "Radford, 2006;",
"ref_id": "BIBREF63"
},
{
"start": 138,
"end": 159,
"text": "Bresnan et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 160,
"end": 171,
"text": "Je\u017eek, 2016",
"ref_id": "BIBREF38"
},
{
"start": 922,
"end": 943,
"text": "Gillick et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 944,
"end": 961,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 962,
"end": 981,
"text": "Cherry et al., 2018",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Concerning the model input, we would like to study whether the CNLM successes crucially depend on the huge amount of training data it receives. Are word priors more important when learning from smaller corpora? In terms of comparison with human learning, the Wikipedia text we fed our CNLMs is far from what children acquiring a language would hear. Future work should explore character/phoneme-level learning from child-directed speech corpora. Still, by feeding our networks ''grown-up'' prose, we are arguably making the job of identifying basic constituents harder than it might be when processing the simpler utterances of early childdirected speech (Tomasello, 2003) .",
"cite_spans": [
{
"start": 655,
"end": 672,
"text": "(Tomasello, 2003)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "As discussed, a rigid word notion is problematic both cross-linguistically (cf. polysynthetic and agglutinative languages) and within single linguistic systems (cf. the view that the lexicon hosts units at different levels of the linguistic hierarchy, from morphemes to large syntactic constructions; e.g., Jackendoff, 1997; Croft and Cruse, 2004; Goldberg, 2005) . This study provided a necessary initial check that word-free models can account for phenomena traditionally seen as word-based. Future work should test whether such models can also account for grammatical patterns that are harder to capture in word-based formalisms, exploring both a typologically wider range of languages and a broader set of grammatical tests.",
"cite_spans": [
{
"start": 307,
"end": 324,
"text": "Jackendoff, 1997;",
"ref_id": "BIBREF36"
},
{
"start": 325,
"end": 347,
"text": "Croft and Cruse, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 348,
"end": 363,
"text": "Goldberg, 2005)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We do not erase punctuation marks, reasoning that they have a similar function to prosodic cues in spoken language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our input data, test sets, and pre-trained models are available at https://github.com/m-hahn/ tabula-rasa-rnns.3 Single-word utterances are not uncommon in childdirected language, but they are still rather the exception than the rule, and many important words, such as determiners, never occur in isolation(Christiansen et al., 2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/attardi/wikiextractor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Adjectives ending in -r often reflect lemmatization problems, as TreeTagger occasionally failed to remove the inflectional suffix -r when lemmatizing. We needed to extract lemmas, as we constructed the appropriate inflected forms on their basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The main source of noun phrase discontinuity in the German UD corpus is extraposition, a common phenomenon where part of the noun phrase is separated from the rest by the verb.13 An example of this unintended reading of mit is: Ich war mit der erste, der hier war. 'I was one of the first who arrived here.' In this context, dative ersten would be ungrammatical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In an early version of this analysis, we arbitrarily imposed a minimum 0.70 correlation threshold, missing the presence of these units. We thank the reviewer who encouraged us to look further into the matter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Piotr Bojanowski, Alex Cristia, Kristina Gulordava, Urvashi Khandelwal, Germ\u00e1n Kruszewski, Sebastian Riedel, Hinrich Sch\u00fctze, and the anonymous reviewers for feedback and advice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Finegrained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR Conference Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine- grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of ICLR Conference Track. Toulon, France. Published online: https://openreview. net/group?id=ICLR.cc/2017/conference",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Encoding of phonology in a recurrent neural model of grounded speech",
"authors": [
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Barking",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afra Alishahi, Marie Barking, and Grzegorz Chrupa\u0142a. 2017. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of CoNLL, pages 368-378, Vancouver.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The proactive brain: Using analogies and associations to generate predictions",
"authors": [
{
"first": "Moshe",
"middle": [],
"last": "Bar",
"suffix": ""
}
],
"year": 2007,
"venue": "Trends in Cognitive Science",
"volume": "11",
"issue": "7",
"pages": "280--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moshe Bar. 2007. The proactive brain: Using anal- ogies and associations to generate predictions. Trends in Cognitive Science, 11(7):280-289.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What do neural machine translation models learn about morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "861--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of ACL, pages 861-872, Vancouver.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The 'word' in polysynthetic languages: Phonological and syntactic challenges",
"authors": [
{
"first": "Balthasar",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Z\u00fa\u00f1iga",
"suffix": ""
}
],
"year": 2017,
"venue": "Oxford Handbook of Polysynthesis",
"volume": "",
"issue": "",
"pages": "158--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balthasar Bickel and Fernando Z\u00fa\u00f1iga. 2017. The 'word' in polysynthetic languages: Phonol- ogical and syntactic challenges, In Michael Fortescue, Marianne Mithun, and Nicholas Evans, editors, Oxford Handbook of Polysynthe- sis, pages 158-186. Oxford University Press, Oxford.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Alternative structures for character-level RNNs",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR Workshop Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Armand Joulin, and Tomas Mikolov. 2016. Alternative structures for character-level RNNs. In Proceedings of ICLR Workshop Track. San Juan, Puerto Rico. Published online: https://openreview. net/group?id=ICLR.cc/2016/workshop.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The TIGER treebank",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Treebanks and Linguistic Theories",
"volume": "168",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, volume 168.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributional regularity and phonotactic constraints are useful for segmentation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Brent",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Cartwright",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "61",
"issue": "",
"pages": "93--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Brent and Timothy Cartwright. 1996. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61:93-125.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lexical-Functional Syntax",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
},
{
"first": "Ash",
"middle": [],
"last": "Asudeh",
"suffix": ""
},
{
"first": "Ida",
"middle": [],
"last": "Toivonen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Wechsler",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bresnan, Ash Asudeh, Ida Toivonen, and Stephen Wechsler. 2016. Lexical-Functional Syntax, 2nd ed., Blackwell, Malden, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The lexical integrity principle: Evidence from Bantu. Natural Language and Linguistic Theory",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Mchombo",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "181--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bresnan and Sam Mchombo. 1995. The lexical integrity principle: Evidence from Bantu. Natural Language and Linguistic Theory, 181-254.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Revisiting character-based neural machine translation with capacity and compression",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.09943"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. arXiv preprint arXiv:1808.09943.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Remarks on nominalization",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1970,
"venue": "Readings in English Transformational Grammar",
"volume": "",
"issue": "",
"pages": "184--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1970, Remarks on nominaliza- tion, In Roderick Jacobs and Peter Rosenbaum, editors, Readings in English Transformational Grammar, pages 184-221. Ginn, Waltham, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multiple-cue integration in language acquisition: A connectionist model of speech segmentation and rule-like behavior",
"authors": [
{
"first": "Morten",
"middle": [],
"last": "Christiansen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Curtin",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Acquisition, Change and Emergence: Essays in Evolutionary Linguistics",
"volume": "",
"issue": "",
"pages": "205--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morten Christiansen, Christopher Conway, and Suzanne Curtin. 2005. Multiple-cue integration in language acquisition: A connectionist model of speech segmentation and rule-like behav- ior, James Minett and William Wang, editors, Language Acquisition, Change and Emer- gence: Essays in Evolutionary Linguistics, pages 205-249. City University of Hong Kong Press, Hong Kong.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to segment speech using multiple cues: A connectionist model",
"authors": [
{
"first": "Morten",
"middle": [],
"last": "Christiansen",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Joseh",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 1998,
"venue": "Language and Cognitive Processes",
"volume": "13",
"issue": "2/3",
"pages": "221--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morten Christiansen, Allen Joseh, and Mark Seidenberg. 1998. Learning to segment speech using multiple cues: A connectionist model. Language and Cognitive Processes, 13(2/3):221-268.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Surfing Uncertainty",
"authors": [
{
"first": "Andy",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andy Clark. 2016. Surfing Uncertainty, Oxford University Press, Oxford.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of ACL, pages 2126-2136, Melbourne.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Are all languages equally hard to language-model?",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "536--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 536-541.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cognitive Linguistics",
"authors": [
{
"first": "William",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cruse",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Croft and Alan Cruse. 2004. Cognitive Linguistics, Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On the Definition of Word",
"authors": [
{
"first": "Anna-Maria Di",
"middle": [],
"last": "Sciullo",
"suffix": ""
},
{
"first": "Edwin",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna-Maria Di Sciullo and Edwin Williams. 1987. On the Definition of Word, MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word: A cross-linguistic typology",
"authors": [],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Dixon and Alexandra Aikhenvald, editors. 2002. Word: A cross-linguistic typology, Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Duden online",
"authors": [],
"year": 2019,
"venue": "mit (Adverb)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dudenredaktion. 2019, mit (Adverb), Duden online. https://www.duden.de/node/ 152710/revision/152746, retrieved June 3, 2019.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Elman. 1990. Finding structure in time. Cognitive Science, 14:179-211.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Assessing composition in sentence vector representations",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1790--1801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of COLING, pages 1790-1801. Santa Fe, NM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The acquisition of anaphora by simple recurrent networks",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Mathis",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Badecker",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Acquisition",
"volume": "20",
"issue": "3",
"pages": "181--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frank, Donald Mathis, and William Badecker. 2013. The acquisition of anaphora by simple recurrent networks. Language Acquisition, 20(3):181-227.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Why neurons mix: High dimensionality for higher cognition",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Fusi",
"suffix": ""
},
{
"first": "Earl",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Mattia",
"middle": [],
"last": "Rigotti",
"suffix": ""
}
],
"year": 2016,
"venue": "Current Opinion in Neurobiology",
"volume": "37",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Fusi, Earl Miller, and Mattia Rigotti. 2016. Why neurons mix: High dimensionality for higher cognition. Current Opinion in Neurobiology, 37:66-74.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Language modeling for morphologically rich languages: Character-aware modeling for word-level prediction",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Edoardo",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "451--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Edoardo Maria Ponti, Jason Naradowsky, Roi Reichart, and Anna Korhonen. 2018. Language modeling for mor- phologically rich languages: Character-aware modeling for word-level prediction. Transac- tions of the Association for Computational Linguistics, 6:451-465.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multilingual language processing from bytes",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Brunk",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1296--1306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of NAACL-HLT, pages 1296-1306.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Explaining character-aware neural networks for word-level prediction: Do they discover linguistic rules?",
"authors": [
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Godin",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Demuynck",
"suffix": ""
},
{
"first": "Joni",
"middle": [],
"last": "Dambre",
"suffix": ""
},
{
"first": "Wesley",
"middle": [],
"last": "De Neve",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9deric Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, and Thomas Demeester. 2018. Explaining character-aware neural networks for word-level prediction: Do they discover linguistic rules? In Proceedings of EMNLP. Brussels.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Constructions at Work: The Nature of Generalization in Language",
"authors": [
{
"first": "Adele",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adele Goldberg. 2005. Constructions at Work: The Nature of Generalization in Language, Oxford University Press, Oxford.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural Network Methods for Natural Language Processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing, Morgan & Claypool, San Francisco, CA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A Bayesian framework for word segmentation: Exploring the effects of context",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2009,
"venue": "Cognition",
"volume": "112",
"issue": "1",
"pages": "21--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21-54.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2014. Generating sequences with re- current neural networks. CoRR, abs/1308.0850v5.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "1195--1205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL, pages 1195-1205. New Orleans, LA.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The indeterminacy of word segmentation and the nature of morphology and syntax",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
}
],
"year": 2011,
"venue": "Folia Linguistica",
"volume": "45",
"issue": "1",
"pages": "31--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Haspelmath. 2011. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica, 45(1):31-80.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Visualisation and ''diagnostic classifiers'' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ''diagnostic classifiers'' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Twistin' the night away. Language",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "73",
"issue": "",
"pages": "534--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 1997. Twistin' the night away. Language, 73:534-559.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Foundations of Language: Brain, Meaning, Grammar, Evolution",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 2002. Foundations of Language: Brain, Meaning, Grammar, Evolution, Oxford University Press, Oxford.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Lexicon: An Introduction",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Je\u017eek",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Je\u017eek. 2016. The Lexicon: An Introduction, Oxford University Press, Oxford.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Representation of linguistic form and function in recurrent neural networks",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e0d\u00e0r",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "761--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e0d\u00e0r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4):761-780.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Unsupervised word segmentation and lexicon discovery using acoustic word embeddings",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Audio, Speech and Language Processing",
"volume": "24",
"issue": "4",
"pages": "669--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Aren Jansen, and Sharon Goldwater. 2016. Unsupervised word segmen- tation and lexicon discovery using acoustic word embeddings. IEEE Transactions on Audio, Speech and Language Processing, 24(4):669-679.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Neural morphological analysis: Encoding-decoding canonical segments",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "961--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Ryan Cotterell, and Hinrich Sch\u00fctze. 2016. Neural morphological analysis: Encoding-decoding canonical segments. In Proceedings of EMNLP, pages 961-967.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Indicatements'' that character language models learn English morpho-syntactic units and regularities",
"authors": [
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP BlackboxNLP Workshop",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yova Kementchedjhieva and Adam Lopez. 2018. ''Indicatements'' that character language models learn English morpho-syntactic units and regularities. In Proceedings of the EMNLP BlackboxNLP Workshop, pages 145-153.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. 2016. Character-aware neural language models. In Proceedings of AAAI, pages 2741-2749, Phoenix, AZ.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate",
"authors": [
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.04783v2"
]
},
"num": null,
"urls": [],
"raw_text": "Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics. arXiv preprint arXiv:1807.04783v2.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Early language acquisition: Cracking the speech code",
"authors": [
{
"first": "Patricia",
"middle": [],
"last": "Kuhl",
"suffix": ""
}
],
"year": 2004,
"venue": "Nature Reviews Neuroscience",
"volume": "5",
"issue": "11",
"pages": "831--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patricia Kuhl. 2004. Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5(11):831-843.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "41",
"issue": "5",
"pages": "1202--1241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and proba- bility: A probabilistic view of linguistic knowl- edge. Cognitive Science, 41(5):1202-1241.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "365--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmen- tation. Transactions of the Association for Computational Linguistics, 5:365-378.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Visualizing and understanding neural models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL, pages 681-691, San Diego, CA.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Proceedings of the EMNLP BlackboxNLP Workshop, ACL",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, and Afra Alishahi, editors. 2018. In Proceedings of the EMNLP BlackboxNLP Workshop, ACL, Brussels.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Trans- actions of the Association for Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Infant sensitivity to distributional information can affect phonetic discrimination",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Maye",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Werker",
"suffix": ""
},
{
"first": "Lou",
"middle": [
"Ann"
],
"last": "Gerken",
"suffix": ""
}
],
"year": 2002,
"venue": "Cognition",
"volume": "82",
"issue": "3",
"pages": "101--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica Maye, Janet Werker, and Lou Ann Gerken. 2002. Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3):B101-B111.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of CogSci",
"volume": "",
"issue": "",
"pages": "2093--2098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of CogSci, pages 2093-2098, Madison, WI.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, et al. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 92-97.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "context2vec: Learning generic context embedding with bidirectional lstm",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51-61.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "An analysis of neural language modeling at multiple scales",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.08240"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Statistical Language Models Based on Neural Networks",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov. 2012. Statistical Language Mod- els Based on Neural Networks. Dissertation, Brno University of Technology.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Subword language modeling with neural networks",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Deoras",
"suffix": ""
},
{
"first": "Hai-Son",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kombrink",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernock\u00fd. 2011. Subword language modeling with neural networks. http://www.fit. vutbr.cz/\u223cimikolov/rnnlm/.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of NAACL, pages 746-751, Atlanta, GA.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Language",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1353/lan.2019.0005"
]
},
"num": null,
"urls": [],
"raw_text": "Joe Pater. 2018. Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Language. doi:10.1353/lan.2019.0005.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Learning to generate reviews and discovering sentiment",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "J\u00f3zefowicz",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Rafal J\u00f3zefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Minimalist syntax revisited",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Radford. 2006. Minimalist syntax re- visited. http://www.public.asu.edu/ \u223cgelderen/Radford2009.pdf.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Syntactic Theory: A Formal Introduction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wasow",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
}
],
"year": 2003,
"venue": "CSLI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Sag, Thomas Wasow, and Emily Bender. 2003. Syntactic Theory: A Formal Introduction, CSLI, Stanford, CA.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "The prosodic word is not universal, but emergent",
"authors": [
{
"first": "Ren\u00e9",
"middle": [],
"last": "Schiering",
"suffix": ""
},
{
"first": "Balthasar",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Kristine",
"middle": [],
"last": "Hildebrandt",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Linguistics",
"volume": "46",
"issue": "3",
"pages": "657--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ren\u00e9 Schiering, Balthasar Bickel, and Kristine Hildebrandt. 2010. The prosodic word is not universal, but emergent. Journal of Linguistics, 46(3):657-709.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Improvements in part-ofspeech tagging with an application to german",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Processing Using Very Large Corpora",
"volume": "",
"issue": "",
"pages": "13--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1999. Improvements in part-of- speech tagging with an application to german, In Natural Language Processing Using Very Large Corpora, pages 13-25. Springer.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Nonsymbolic text representation",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "785--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 2017. Nonsymbolic text repre- sentation. In Proceedings of EACL, pages 785-796. Valencia.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "How grammatical is character-level neural machine translation? assessing MT quality with contrastive translation pairs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL (Short Papers)",
"volume": "",
"issue": "",
"pages": "376--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2017. How grammatical is character-level neural machine translation? assessing MT quality with contrastive translation pairs. In Proceedings of EACL (Short Papers), pages 376-382, Valencia.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Does string-based neural MT learn source syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of EMNLP, pages 1526-1534, Austin, TX.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Generating text with recurrent neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1017--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of ICML, pages 1017-1024, Bellevue, WA.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Constructing a Language: A Usage-Based Theory of Language Acquisition",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Tomasello",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Tomasello. 2003. Constructing a Lan- guage: A Usage-Based Theory of Lan- guage Acquisition, Harvard University Press, Cambridge, MA.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Dumping lexicalism",
"authors": [
{
"first": "Edwin",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2007,
"venue": "The Oxford Handbook of Linguistic Interfaces",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edwin Williams. 2007. Dumping lexicalism, In Gillian Ramchand and Charles Reiss, editors, The Oxford Handbook of Linguistic Interfaces, Oxford University Press, Oxford.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Exploiting linguistic features for sentence completion",
"authors": [
{
"first": "Aubrie",
"middle": [],
"last": "Woods",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "438--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aubrie Woods. 2016. Exploiting linguistic fea- tures for sentence completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 438-442.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Top-down tree long shortterm memory networks",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "310--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Liang Lu, and Mirella Lapata. 2016. Top-down tree long short- term memory networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 310-320.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "The Microsoft Research sentence completion challenge",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Burges",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Zweig and Christopher Burges. 2011. The Microsoft Research sentence completion challenge, Technical Report MSR-TR-2011- 129, Microsoft Research.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Computational approaches to sentence completion",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
},
{
"first": "Ainur",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "601--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Zweig, John C. Platt, Christopher Meek, Christopher J. C. Burges, Ainur Yessenalina, and Qiang Liu. 2012. Computational approaches to sentence completion. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 601-610.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Accuracy in the German syntax tasks, as a function of number of intervening words."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "meno {aliena, aliene} the (s.) less alien one(s) b. le meno {aliena, aliene} the (p.) less alien one(s)"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Examples of the LSTM CNLM boundary unit activation profile, with ground-truth word boundaries marked in green. English: It was co-produced with Martin Buttrich over at. . . . German: Systeme, deren Hauptaufgabe die transformati(-on) 'systems, whose main task is the transformation. . . '. Italian: in seguito alle dimissioni del Sommo Pontefice 'following the resignation of the Supreme Pontiff. . . '."
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: Performance of language models. For</td></tr><tr><td>CNLMs, we report bits-per-character (BPC). For</td></tr><tr><td>WordNLMs, we report perplexity.</td></tr><tr><td>(2014) for his static character-level LSTM trained</td></tr><tr><td>on space-delimited Wikipedia data, suggesting</td></tr><tr><td>that we are achieving reasonable performance.</td></tr><tr><td>The perplexity of the word-level model might not</td></tr><tr><td>be comparable to that of highly optimized state-</td></tr><tr><td>of-the-art architectures, but it is at the expected</td></tr><tr><td>level for a well-tuned vanilla LSTM language</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Accuracy of diagnostic classifier on predicting word class, with standard errors across 100 random train-test splits. 'subs.' marks invocabulary subset evaluation, not comparable with the other results.",
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"text": "German number classification accuracy, with standard errors computed from 200 random train-test splits. 'subs.' marks in-vocabulary subset evaluation, not comparable to the other results.",
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"text": "Italian agreement results. Random baseline accuracy is 50% in all three experiments.",
"content": "<table><tr><td>model's results on its full-coverage subset, where</td></tr><tr><td>the CNLM performance is only slightly above the</td></tr><tr><td>one reported.</td></tr><tr><td>Results are shown on the first line of</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: Results on MSR Sentence Completion. For</td></tr><tr><td>our models (top), we show accuracies for Wikipedia</td></tr><tr><td>(left) and in-domain (right) training. We compare</td></tr><tr><td>with language models from prior work (left): Kneser-</td></tr><tr><td>Ney 5-gram model</td></tr></table>"
},
"TABREF11": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: Accuracy of single-unit and full-hidden-</td></tr><tr><td>state word-boundary diagnostic classifiers,</td></tr><tr><td>trained and tested on balanced data requiring</td></tr><tr><td>new-word generalization. Chance accuracy is at</td></tr><tr><td>50%.</td></tr><tr><td>are below LSTM-level but still strong. There is,</td></tr><tr><td>however, a stronger drop from full-layer to single-</td></tr><tr><td>unit classification. This is in line with the fact that,</td></tr><tr><td>as reported above, the candidate RNN boundary</td></tr><tr><td>units have lower boundary correlations than the</td></tr><tr><td>LSTM ones.</td></tr></table>"
},
"TABREF14": {
"type_str": "table",
"html": null,
"num": null,
"text": "Chosen hyperparameters.",
"content": "<table/>"
}
}
}
}