|
{ |
|
"paper_id": "Q19-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:09:28.474231Z" |
|
}, |
|
"title": "On the Complexity and Typology of Inflectional Morphological Systems", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We quantify the linguistic complexity of different languages' morphological systems. We verify that there is a statistically significant empirical trade-off between paradigm size and irregularity: A language's inflectional paradigms may be either large in size or highly irregular, but never both. We define a new measure of paradigm irregularity based on the conditional entropy of the surface realization of a paradigmhow hard it is to jointly predict all the word forms in a paradigm from the lemma. We estimate irregularity by training a predictive model. Our measurements are taken on large morphological paradigms from 36 typologically diverse languages.", |
|
"pdf_parse": { |
|
"paper_id": "Q19-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We quantify the linguistic complexity of different languages' morphological systems. We verify that there is a statistically significant empirical trade-off between paradigm size and irregularity: A language's inflectional paradigms may be either large in size or highly irregular, but never both. We define a new measure of paradigm irregularity based on the conditional entropy of the surface realization of a paradigmhow hard it is to jointly predict all the word forms in a paradigm from the lemma. We estimate irregularity by training a predictive model. Our measurements are taken on large morphological paradigms from 36 typologically diverse languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "What makes an inflectional system ''complex''? Linguists have sometimes considered measuring this by the size of the inflectional paradigms (McWhorter, 2001) . The number of distinct inflected forms of each word indicates the number of morphosyntactic distinctions that the language makes on the surface. However, this gives only a partial picture of complexity (Sagot, 2013) . Some inflectional systems are more irregular: It is harder to guess how the inflected forms of a word will be spelled or pronounced, given the base form. Ackerman and Malouf (2013) hypothesize that there is a limit to the irregularity of an inflectional system. We refine this hypothesis to propose that systems with many forms per paradigm have an even stricter limit on irregularity per distinct form. That is, the two dimensions interact: A system cannot be complex along both axes at once. In short, if a language demands that its speakers use a lot of distinct forms, those forms must be relatively predictable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 157, |
|
"text": "(McWhorter, 2001)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 375, |
|
"text": "(Sagot, 2013)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 558, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we develop information-theoretic tools to operationalize this hypothesis about the complexity of inflectional systems. We model each inflectional system using a tree-structured directed graphical model whose factors are neural networks and whose structure (topology) must be learned along with the factors. We explain our approach to quantifying two aspects of inflectional complexity and, in one case, approximate our metric using a simple variational bound. This allows a data-driven approach by which we can measure the morphological complexity of a given language in a clean manner that is more theoryagnostic than previous approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our study evaluates 36 diverse languages, using collections of paradigms represented orthographically. Thus, we are measuring the complexity of each written language. The corresponding spoken language would have different complexity, based on the corresponding phonological forms. Importantly, our method does not depend upon a linguistic analysis of words into constituent morphemes (e.g., hoping \u2192 hope+ing). We find support for the complexity trade-off hypothesis. Concretely, we show that the more unique forms an inflectional paradigm has, the more predictable the forms must be from one another-for example, forms in a predictable paradigm might all be related by a simple change of suffix. This intuition has a long history in the linguistics community, as field linguists have often noted that languages with extreme morphological richness, for example, agglutinative and polysynthetic languages, have virtually no exceptions or irregular forms. Our contribution lies in mathematically formulating this notion of regularity and providing a means to estimate it by fitting a probability model. Using these tools, we provide a quantitative verification of this conjecture on a large set of typologically diverse languages, which is significant with p < 0.037.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We adopt the framework of word-based morphology (Aronoff, 1976; Spencer, 1991) . An inflected lexicon in this framework is represented as a set of word types. Each word type is a triple of \u2022 a lexeme (an arbitrary integer or string that indexes the word's core meaning and part of speech)", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 63, |
|
"text": "(Aronoff, 1976;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 64, |
|
"end": 78, |
|
"text": "Spencer, 1991)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 a slot \u03c3 (an arbitrary integer or object that indicates how the word is inflected)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 a surface form w (a string over a fixed phonological or orthographic alphabet \u03a3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A paradigm m is a map from slots to surface forms. 1 We use dot notation to access elements of this map. For example, m.past denotes the past-tense surface form in paradigm m.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 52, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "An inflected lexicon for a language can be regarded as defining a map M from lexemes to their paradigms. Specifically, M ( ).\u03c3 = w iff the lexicon contains the triple ( , \u03c3, w). 2 For example, in the case of the English lexicon, if is the English lexeme walk Verb , then M ( ).past = walked. In linguistic terms, we say that in 's paradigm M ( ), the past-tense slot is filled (or realized) by walked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
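{

"text": "To make the word-based setup concrete, here is a minimal Python sketch (ours, not the paper's; names are illustrative) of an inflected lexicon as a map M from lexemes to paradigms, where each paradigm maps slots to surface forms:\n\n# A paradigm maps slots to surface forms; the lexicon maps lexemes to paradigms.\nM = {\n    'walk_V': {'pres': 'walk', '3s': 'walks', 'past': 'walked',\n               'pastp': 'walked', 'presp': 'walking'},\n}\n\n# The paper's dot notation M(l).past = walked becomes a dictionary lookup:\nassert M['walk_V']['past'] == 'walked'\n\n# The lexicon contains the triple (l, sigma, w) iff M[l][sigma] == w:\ntriples = {(l, s, w) for l, m in M.items() for s, w in m.items()}\nassert ('walk_V', '3s', 'walks') in triples",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word-Based Morphology",

"sec_num": "2.1"

},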
|
{ |
|
"text": "Nothing in our method requires a Bloomfieldian structuralist analysis that decomposes each word into underlying morphs; rather, this paper is a-morphous in the sense of Anderson (1992) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 184, |
|
"text": "Anderson (1992)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "More specifically, we will work within the UniMorph annotation scheme (Sylak-Glassman, 2016) . In the simplest case, each slot \u03c3 specifies a morphosyntactic bundle of inflectional features such as tense, mood, person, number, and gender. For example, the Spanish surface form pongas (from the lexeme poner 'to put') fills a slot that indicates that this word has the features [ TENSE= PRESENT, MOOD=SUBJUNCTIVE, PERSON=2, NUMBER=SG].", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 92, |
|
"text": "(Sylak-Glassman, 2016)", |
|
"ref_id": "BIBREF58" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We postpone a discussion of the details of UniMorph until \u00a77.1, but it is mostly compatible with other, similar schemes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Based Morphology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Ackerman and Malouf (2013) distinguish two types of morphological complexity, which we elaborate on below. For a more general overview of morphological complexity, see .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Defining Complexity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The first type, enumerative complexity (e-complexity), measures the number of surface morphosyntactic distinctions that a language makes within a part of speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "Given a lexicon, our present paper will measure the e-complexity of the verb system as the average of the verb paradigm size |M ( )|, where ranges over all verb lexemes in domain(M ). Importantly, we define the size |m| of a paradigm m to be the number of distinct surface forms in the paradigm, rather than the number of slots. That is, |m| def = |range(m)| rather than |domain(m)|. Under our definition, nearly all English verb paradigms have size 4 or 5, giving the English verb system an e-complexity between 4 and 5. If m = M (walk Verb ), then |m| = 4, since range(m) = {walk, walks, walked, walking}. The manually constructed lexicon may define separate slots \u03c3 1 = [ TENSE=PRESENT, PERSON=1, NUMBER=SG ] and \u03c3 2 = [ TENSE=PRESENT, PERSON=2, NUMBER=SG ], but in this paradigm, those slots are not distinguished by any morphological marking: m.\u03c3 1 = m.\u03c3 2 = walk. Nor is the past tense walked distinguished from the past participle. This phenomenon is known as syncretism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
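{

"text": "The e-complexity computation described above is easy to state in code; a minimal sketch (ours), where paradigm size counts distinct surface forms, |range(m)|, rather than slots:\n\ndef paradigm_size(m):\n    # |m| = |range(m)|: distinct surface forms, so syncretic slots count once.\n    return len(set(m.values()))\n\ndef e_complexity(paradigms):\n    # Average paradigm size over all lexemes of the given part of speech.\n    return sum(paradigm_size(m) for m in paradigms) / len(paradigms)\n\nwalk = {'pres': 'walk', '3s': 'walks', 'past': 'walked',\n        'pastp': 'walked', 'presp': 'walking'}\neat = {'pres': 'eat', '3s': 'eats', 'past': 'ate',\n       'pastp': 'eaten', 'presp': 'eating'}\nassert paradigm_size(walk) == 4  # walked fills two slots\nassert paradigm_size(eat) == 5\nprint(e_complexity([walk, eat]))  # 4.5, i.e., between 4 and 5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Enumerative Complexity",

"sec_num": "2.2.1"

},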
|
{ |
|
"text": "Why might the creator of a lexicon choose to define two slots for a syncretic form, rather than a single merged slot? Perhaps because the slots are not always syncretic: in the example above, one English verb, be, does distinguish \u03c3 1 and \u03c3 2 . 3 But an English lexicon that did choose to merge \u03c3 1 and \u03c3 2 could handle be by adding extra slots that are used only with be. A second reason is that the merged slot might be inelegant to describe using the feature bundle notation: English verbs (other than be) have a single form shared by the bare infinitive and all present tense forms except 3rd-person singular, but a single slot for this form could not be easily characterized by a single feature bundle, and so the lexicon creator might reasonably split it for convenience. A third reason might be an attempt at consistency across languages: In principle, an English lexicon is free to use the same slots as Sanskrit and thus list dual and plural forms for every English noun, which just happen to be identical in every case (complete syncretism).", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 246, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "The point is that our e-complexity metric is insensitive to these annotation choices. It focuses on observable surface distinctions, and so does not care whether syncretic slots are merged or kept separate. Later, we will construct our i-complexity metric to have the same property.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "The notion of e-complexity has a long history in linguistics. The idea was explicitly discussed as early as Sapir (1921) . More recently, Sagot (2013) has referred to this concept as counting complexity, referencing comparison of the complexity of creoles and non-creoles by McWhorter (2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 120, |
|
"text": "Sapir (1921)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 150, |
|
"text": "Sagot (2013)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 291, |
|
"text": "McWhorter (2001)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "For a given part of speech, e-complexity appears to vary dramatically over the languages of the world. Whereas the regular English verb paradigm has 4-5 slots in our annotation, the Archi verb will have thousands (Kibrik, 1998) . However, does this make the Archi system more complex, in the sense of being more difficult to describe or learn? Despite the plethora of forms, it is often the case that one can regularly predict one form from another, indicating that few forms actually have to be memorized for each lexeme.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 227, |
|
"text": "(Kibrik, 1998)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enumerative Complexity", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "The second notion of complexity is integrative complexity (i-complexity), which measures how regular an inflectional system is on the surface. Students of a foreign language will most certainly have encountered the concept of an irregular verb. Pinning down a formal and workable crosslinguistic definition is non-trivial, but the intuition that some inflected forms are regular and others irregular dates back at least to Bloomfield (1933, pp. 273-274) , [who famously argued that what makes a surface form regular is that it is the output of a deterministic function. For an in-depth dissection of the subject, see Stolz et al. (2012) . Ackerman and Malouf (2013) build their definition of i-complexity on the information-theoretic notion of entropy (Shannon, 1948) . Their intuition is that a morphological system should be considered complex to the extent that its forms are unpredictable. They say, for example, that the nominative singular form is unpredictable in a language if many verbs express it with suffix -o while many others use -\u2205. In \u00a75, we will propose an improvement to their entropy-based measure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 453, |
|
"text": "Bloomfield (1933, pp. 273-274)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 636, |
|
"text": "Stolz et al. (2012)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 665, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 767, |
|
"text": "(Shannon, 1948)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integrative Complexity", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "The low-entropy conjecture, as formulated by Ackerman and Malouf (2013, p. 436) , ''is the hypothesis that enumerative morphological complexity is effectively unrestricted, as long as the average conditional entropy, a measure of integrative complexity, is low.'' Indeed, Ackerman and Malouf go so far as to say that there need be no upper bound on e-complexity, but the i-complexity must remain sufficiently low (as is the case for Archi, for example). Our hypothesis is subtly different in that we postulate that morphological systems face a trade-off between e-complexity and i-complexity: a system may be complex under either metric, but not under both. The amount of e-complexity permitted is higher when i-complexity is low.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 79, |
|
"text": "Ackerman and Malouf (2013, p. 436)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Low-Entropy Conjecture", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "This line of thinking harks back to the equal complexity conjecture of Hockett, who stated: ''objective measurement is difficult, but impressionistically it would seem that the total grammatical complexity of any language, counting both the morphology and syntax, is about the same as any other'' (Hockett, 1958, pp. 180-181) . Similar trade-offs have been found in other branches of linguistics (see Oh [2015] for a review). For example, there is a trade-off between rate of speech and syllable complexity (Pellegrino et al., 2011) : This means that even though Spanish speakers utter many more syllables per second than Chinese, the overall information rate is quite similar as Chinese syllables carry more information (they contain tone information).", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 325, |
|
"text": "(Hockett, 1958, pp. 180-181)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 532, |
|
"text": "(Pellegrino et al., 2011)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Low-Entropy Conjecture", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Hockett's equal complexity conjecture is controversial: some languages (such as Riau Indonesian) do seem low in complexity across morphology and syntax (Gil, 1994) . This is why Ackerman and Malouf instead posit that a linguistic system has bounded integrative complexity-it must not be too high, though it can be low, as indeed it is in isolating languages like Chinese and Thai.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 163, |
|
"text": "(Gil, 1994)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Low-Entropy Conjecture", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3 Paradigm Entropy", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Low-Entropy Conjecture", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Following Dreyer and Eisner (2009) and Cotterell et al. (2015) , we identify a language's inflectional system with a probability distribution p(M = m) over possible paradigms. 4 Our measure of i-complexity will be related to the entropy of this distribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 34, |
|
"text": "Dreyer and Eisner (2009)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 39, |
|
"end": 62, |
|
"text": "Cotterell et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 177, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For instance, knowing the behavior of the English verb system essentially means knowing a joint distribution over 5-tuples of surface forms such as (run, runs, ran, run, running). More precisely, one knows probabilities such as p(M .pres = run, M .3s = runs, M .past = ran, M .pastp = run, M .presp = running).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We do not observe p directly, but each observed paradigm (5-tuple) can help us estimate it. We assume that the paradigms m in the inflected lexicon were drawn independently and identically distributed (IID) from p. Any novel verb paradigm in the future would be drawn from p as well. The distribution p represents the inflectional system because it describes what regular paradigms and plausible irregular paradigms tend to look like.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The fact that some paradigms are used more frequently than others (more tokens in a corpus) does not mean that they have higher probability under the morphological system p(m). Rather, their higher usage reflects the higher probability of their lexemes. That is due to unrelated factors-the probability of a lexeme may be modeled separately by a stick-breaking process (Dreyer and Eisner, 2011) , or may reflect the semantic meaning associated to that lexeme. The role of p(m) in the model is only to serve as the base distribution from which a lexeme type selects the tuple of strings m = M ( ) that will be used thereafter to express .", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 394, |
|
"text": "(Dreyer and Eisner, 2011)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We expect the system to place low probability on implausible paradigms: For example, p(run, , , run, running) is close to zero. Moreover, we expect it to assign high conditional probability to the result of applying highly regular processes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For example, for p(M .presp | M .3s) in English, we have p(wugging | wugs) \u2248 p(running | runs) \u2248 1, where wug is a novel verb. Nonetheless, our estimate of p(M .presp = w | M .3s = wugs) will have support over w \u2208 \u03a3 * \u00d7 \u2022 \u2022 \u2022 \u00d7 \u03a3 * , due to smoothing. The 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Formally speaking, we assume a discrete sample space in which each outcome is a possible lexeme equipped with a paradigm M ( ). Recall that a random variable is technically defined as a function of the outcome. Thus, M is a paradigm-valued random variable that returns the whole paradigm. M .past is a string-valued random expression that returns the past slot, so \u03c0(M .past = ran) is a marginal probability that marginalizes over the rest of the paradigm. model is thus capable of evaluating arbitrary wug-formations (Berko, 1958) , including irregular ones.", |
|
"cite_spans": [ |
|
{ |
|
"start": 518, |
|
"end": 531, |
|
"text": "(Berko, 1958)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology as a Distribution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The distribution p gives rise to the paradigm entropy H(M ), also written as H(p). This is the expected number of bits needed to represent a paradigm drawn from p, under a code that is optimized for this purpose. Thus, it may be related to the cost of learning paradigms or the cost of storing them in memory, and thus relevant to functional pressures that prevent languages from growing too complex. (There is no guarantee, of course, that human learners actually estimate the distribution p, or that its entropy actually represents the cognitive cost of learning or storing paradigms.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paradigm Entropy", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our definition of i-complexity in \u00a75 will (roughly speaking) divide H(M ) by the e-complexity, so that the i-complexity is measured in bits per distinct surface form. This approach is inspired by Ackerman and Malouf (2013); we discuss the differences in \u00a76.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paradigm Entropy", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We now review how to estimate H(M ) by estimating p by a model q. We do not actually know the true distribution p. Furthermore, even if we knew p, the definition of H(M ) involves a sum over the infinite set of n-tuples (\u03a3 * ) n , which is intractable for most distributions p. Thus, following Brown et al. (1992), we will use a probability model to define a good upper bound for H(M ) and held-out data to estimate that bound.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For any distribution p, the entropy H(p) is upper-bounded by the cross-entropy H(p, q), where q is any other distribution over the same space", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": ": 5 m p(m)[\u2212 log p(m)] \u2264 m p(m)[\u2212 log q(m)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(1) (Throughout this paper, log denotes log 2 .) The gap between the two sides is the Kullback-Leibler divergence D(p || q), which is 0 iff p = q.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Maximum-likelihood training of a probability model q \u2208 Q is an attempt to minimize this Figure 1 : A specific Spanish verb paradigm as it would be generated by two different tree-structured Bayesian networks. The nodes in each network represent the slots dictated by the paradigm's shape (not labeled). The topology in (a) predicts all forms from the lemma. The topology in (b), on the other hand, makes it easier to predict some of the forms given the others: pongas is predicted from pongo, with which it shares a stem. Qualitatively, the structure selection algorithm in \u00a74.4 finds trees like (b).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "gap by minimizing the right-hand side. More precisely, it minimizes the sampling-based esti-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "mate mp train (m)[\u2212 log q(m)],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "wherep train is the empirical distribution of a set of training examples that are assumed to be drawn IID from p.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Because the trained q may be overfit to the training examples, we must make our final estimate of H(p, q) using a separate set of held-out test examples, as mp test (m)[\u2212 log q(m)]. We then use this as our (upwardly biased) estimate of the paradigm entropy H(p). In our setting, both the training and the test examples are paradigms from a given inflected lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variational Upper Bound on Entropy", |
|
"sec_num": "3.3" |
|
}, |
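{

"text": "Operationally, this estimate is just the average negative log-probability of held-out paradigms under the trained model; a schematic sketch (ours), where log_q is a hypothetical scoring function returning log2 q(m):\n\nimport math\n\ndef cross_entropy_upper_bound(test_paradigms, log_q):\n    # H(p) <= H(p, q), estimated as the mean of -log2 q(m) over held-out m.\n    return -sum(log_q(m) for m in test_paradigms) / len(test_paradigms)\n\n# Toy q: a known distribution over three 'paradigms'.\nq = {'a': 0.5, 'b': 0.25, 'c': 0.25}\nlog_q = lambda m: math.log2(q[m])\nprint(cross_entropy_upper_bound(['a', 'a', 'b', 'c'], log_q))  # 1.5 bits",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Variational Upper Bound on Entropy",

"sec_num": "3.3"

},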
|
{ |
|
"text": "To fit q given the training set, we need a tractable family Q of joint distributions over paradigms, with parameters \u03b8. The structure of the model and the number of parameters \u03b8 will be determined automatically from the training set: A language with more slots overall or more paradigm shapes will require more parameters. This means that Q is technically a semi-parametric family.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Generative Model of the Paradigm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We say that two paradigms m, m have the same shape if they define the same slots (that is, domain(m) = domain(m )) and the same pairs of slots are syncretic in both paradigms (that is, m.\u03c3 = m.\u03c3 iff m .\u03c3 = m .\u03c3 ). Notice that paradigms of the same shape must have the same size (but not conversely). Most English verbs use one of 2 shapes: In 4-form verbs such as regular sprint and irregular stand, the past participle is syncretic with the past tense, whereas in irregular 5-form verbs such as eat, that is not so. There are also a few other English verb paradigm shapes: For example, run has only 4 distinct forms, but in its paradigm, the past participle is syncretic with the present tense. The verb be has a shape of its own, with 8 distinct forms. The extra slots needed for be might be either missing in other shapes, or present but syncretic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paradigm Shapes", |
|
"sec_num": "4.1" |
|
}, |
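{

"text": "One way to operationalize shape in code (our sketch, not the paper's implementation): two paradigms share a shape iff they have the same slots and the same syncretism pattern, i.e., they induce the same partition of slots by string identity:\n\ndef shape(m):\n    # Group slots by their shared surface form; the resulting partition of\n    # slots identifies the shape, independently of the actual strings.\n    groups = {}\n    for slot, form in m.items():\n        groups.setdefault(form, []).append(slot)\n    return frozenset(frozenset(g) for g in groups.values())\n\nsprint_ = {'pres': 'sprint', '3s': 'sprints', 'past': 'sprinted',\n           'pastp': 'sprinted', 'presp': 'sprinting'}\nstand = {'pres': 'stand', '3s': 'stands', 'past': 'stood',\n         'pastp': 'stood', 'presp': 'standing'}\neat = {'pres': 'eat', '3s': 'eats', 'past': 'ate',\n       'pastp': 'eaten', 'presp': 'eating'}\nassert shape(sprint_) == shape(stand)  # past participle syncretic with past\nassert shape(sprint_) != shape(eat)    # eat has 5 distinct forms",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Paradigm Shapes",

"sec_num": "4.1"

},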
|
{ |
|
"text": "Our model q \u03b8 says that the first step in generating a paradigm is to pick its shape s. This uses a distribution q \u03b8 (S = s), which we estimate by maximum likelihood from the training set. Thus, s ranges over the set S of shapes that appear in the training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paradigm Shapes", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Next, conditioned on the shape s, we follow Cotterell et al. (2017b) and generate all the forms of the paradigm using a tree-structured Bayesian network-a directed graphical model in which the form at each slot is generated conditionally on the form at a single parent slot. Figure 1 illustrates two possible tree structures for Spanish verbs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 68, |
|
"text": "Cotterell et al. (2017b)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 283, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Tree-Structured Distribution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Each paradigm shape s has its own tree structure. If slot \u03c3 exists in shape s, we denote its parent in our shape s model by pa s (\u03c3). Then our model is 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Tree-Structured Distribution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "q \u03b8 (m | s) = \u03c3\u2208s q \u03b8 (m.\u03c3 | m.pa s (\u03c3), S = s)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Tree-Structured Distribution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(2) For the slot \u03c3 at root of the tree, pa s (\u03c3) is defined to be a special slot empty with an empty feature bundle, whose form is fixed to be the empty string. In the product above, \u03c3 does not range over empty.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Tree-Structured Distribution", |
|
"sec_num": "4.2" |
|
}, |
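{

"text": "The factorization in equation (2) amounts to summing edge log-probabilities over the tree; a sketch (ours), where log_q_edge is a hypothetical stand-in for the seq2seq conditional:\n\ndef log_q_paradigm(m, parent, log_q_edge):\n    # log q(m | s) = sum over slots sigma of log q(m.sigma | m.pa_s(sigma), S=s);\n    # the root's parent is the special slot 'empty', whose form is the empty string.\n    total = 0.0\n    for slot in m:\n        p = parent.get(slot, 'empty')\n        parent_form = '' if p == 'empty' else m[p]\n        total += log_q_edge(src=parent_form, tgt=m[slot])\n    return total\n\n# Toy usage with a trivial scorer that charges 1 bit per edge:\nm = {'lemma': 'poner', '1sg': 'pongo', '2sg_sbjv': 'pongas'}\nparent = {'1sg': 'lemma', '2sg_sbjv': '1sg'}  # roughly tree (b) of Figure 1\nprint(log_q_paradigm(m, parent, lambda src, tgt: -1.0))  # -3.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Tree-Structured Distribution",

"sec_num": "4.2"

},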
|
{ |
|
"text": "We model all of the conditional probability factors in equation 2using a neural sequence-tosequence model with parameters \u03b8. Specifically, we follow Kann and Sch\u00fctze (2016) and use a long short-term memory-based sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) with attention (Bahdanau et al., 2015) . This is the state of the art in morphological reinflection (i.e., the conversion of one inflected form to another [Cotterell et al., 2016] ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 172, |
|
"text": "Kann and Sch\u00fctze (2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 273, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 312, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 453, |
|
"text": "[Cotterell et al., 2016]", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Sequence-to-Sequence Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For example, in German, q \u03b8 (M .nompl = H\u00e4nde | M .nomsg = Hand, S = 3) is given by the probability that the seq2seq model assigns to the output sequence H\u00e4 n d e when given the input sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Sequence-to-Sequence Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The input sequence indicates the parent slot (nominative singular) and the child slot (nominative plural), by using special characters to specify their feature bundles. This tells the seq2seq model what kind of inflection to do. The input sequence also indicates the paradigm shape s. Thus, we are able to use only a single seq2seq model, with parameters \u03b8, to handle all of the conditional distributions in the entire model. Sharing parameters across conditional distributions is a form of multi-task learning and may improve generalization to held-out data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H a n d S=3 IN=NOM IN=SG OUT=NOM OUT=PL", |
|
"sec_num": null |
|
}, |
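{

"text": "Schematically, the input for the German example above can be constructed as follows (our sketch; the token inventory mirrors the example, not a published specification):\n\ndef make_input(parent_form, parent_feats, child_feats, shape_id):\n    # Input: the parent form's characters, then special tokens for the shape\n    # and the parent and child feature bundles; the target is the child form.\n    return (list(parent_form) + ['S=%d' % shape_id]\n            + ['IN=%s' % f for f in parent_feats]\n            + ['OUT=%s' % f for f in child_feats])\n\nsrc = make_input('Hand', ['NOM', 'SG'], ['NOM', 'PL'], 3)\nprint(' '.join(src))  # H a n d S=3 IN=NOM IN=SG OUT=NOM OUT=PL\n# target character sequence: H \u00e4 n d e",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural Sequence-to-Sequence Model",

"sec_num": "4.3"

},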
|
{ |
|
"text": "As a special case, if \u03c3 and \u03c3 are syncretic within shape s, then we define q \u03b8 (M .\u03c3 = w | M .\u03c3 = w , S = s) to be 1 if w = w and 0 otherwise. The seq2seq model is skipped in such cases: It is only used on non-syncretic parent-child pairs. As a result, if shape s has 5 slots that are all syncretic with one another, 4 of these slots can be derived by deterministic copying. As they are completely predictable, they contribute log 1 = 0 bits to the paradigm entropy. The method in the next section will always favor a tree structure that exploits copying. As a result, the extra 4 slots will not increase the i-complexity, just as they do not increase the e-complexity (recall \u00a72.2.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H a n d S=3 IN=NOM IN=SG OUT=NOM OUT=PL", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We train the parameters \u03b8 on all non-syncretic slot pairs in the training set. Thus, a paradigm with n distinct forms contributes n 2 training examples: Each form in the paradigm is predicted from each of the n \u2212 1 other forms, and from the empty form. We use maximum-likelihood training (see \u00a77.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H a n d S=3 IN=NOM IN=SG OUT=NOM OUT=PL", |
|
"sec_num": null |
|
}, |
|
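{

"text": "The training-pair construction just described, as a sketch (ours; feature-bundle decoration of the pairs is omitted): each distinct form is predicted from each of the other n - 1 distinct forms and from the empty form, giving n^2 examples per paradigm:\n\ndef training_pairs(m):\n    # Collapse syncretic slots first: only distinct surface forms are predicted.\n    forms = sorted(set(m.values()))\n    pairs = []\n    for tgt in forms:\n        pairs.append(('', tgt))  # predicted from the empty form\n        for src in forms:\n            if src != tgt:\n                pairs.append((src, tgt))\n    return pairs\n\nwalk = {'pres': 'walk', '3s': 'walks', 'past': 'walked',\n        'pastp': 'walked', 'presp': 'walking'}\nassert len(training_pairs(walk)) == 16  # n = 4 distinct forms -> n^2 pairs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural Sequence-to-Sequence Model",

"sec_num": "4.3"

},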
|
{ |
|
"text": "Given a model q \u03b8 , we can decompose its entropy H(q \u03b8 ) into a weighted sum of conditional entropies", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "H(M ) = H(S) + s\u2208S p(S = s)H(M | S = s) (3) where H(M | S = s) = \u03c3\u2208s H(M .\u03c3 | M .pa s (\u03c3), S = s) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The cross-entropy H(p, q \u03b8 ) has a similar decomposition. The only difference is that all of the (conditional) entropies are replaced by (conditional) cross-entropies, meaning that they are estimated using a held-out sample from p rather than q \u03b8 . The log-probabilities are still taken from q \u03b8 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "It follows that given a fixed \u03b8 (as trained in the previous section), we can minimize H(p, q \u03b8 ) by choosing the tree for each shape s that minimizes the cross-entropy version of equation 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "How? For each shape s, we select the minimumweight directed spanning tree over the n s slots used by that shape, as computed by the Chu-Liu-Edmonds algorithm (Edmonds, 1967) . 7 The weight of each potential directed edge \u03c3 \u2192 \u03c3 is the conditional cross-entropy H(M .\u03c3 | M .\u03c3 , S = s) under the seq2seq model trained in the previous section, so equation 4implies that the weight of a tree is the cross-entropy we would get by selecting that tree. 8 In practice, we estimate the conditional cross-entropy for the non-syncretic slot pairs using a held-out development set (not the test set). For syncretic slot pairs, which are handled by copying, the conditional cross-entropy is always 0, so edges between syncretic slots can be selected free of cost.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 173, |
|
"text": "(Edmonds, 1967)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 177, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
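{

"text": "A sketch of this structure-selection step using the Edmonds implementation in networkx (our illustration, not the paper's code; in practice the edge weights would be the held-out conditional cross-entropies):\n\nimport networkx as nx\n\ndef select_tree(slots, weight):\n    # Complete directed graph over the slots plus a virtual root 'empty';\n    # weight(src, tgt) estimates the conditional cross-entropy H(tgt | src).\n    G = nx.DiGraph()\n    for tgt in slots:\n        G.add_edge('empty', tgt, weight=weight('empty', tgt))\n        for src in slots:\n            if src != tgt:\n                G.add_edge(src, tgt, weight=weight(src, tgt))\n    return sorted(nx.minimum_spanning_arborescence(G).edges())\n\n# Toy weights: nompl is cheap to predict from nomsg but costly from scratch.\nw = {('empty', 'nomsg'): 8.0, ('empty', 'nompl'): 9.0,\n     ('nomsg', 'nompl'): 1.0, ('nompl', 'nomsg'): 1.5}\nprint(select_tree(['nomsg', 'nompl'], lambda s, t: w[(s, t)]))\n# [('empty', 'nomsg'), ('nomsg', 'nompl')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Structure Selection",

"sec_num": "4.4"

},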
|
{ |
|
"text": "After selecting the tree, we could retrain the seq2seq parameters \u03b8 to focus on the conditional distributions we actually use, training on only the slot pairs in each training paradigm that correspond to an tree edge in the model of that paradigm's shape. Our experiments in \u00a77 omitted this step. But in fact, training on all n 2 pairs may even have found a better \u03b8: It can be seen as a form of multi-task regularization (available also to human learners).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Having defined a way to approximate paradigm entropy, H(M ), we finally operationalize our measure of i-complexity for a language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "One Paradigm Shape. We start with the simple case where the language has a single paradigm shape: S = {s}. Our initial idea was to define i-complexity as bits per form, H(M ) / |s|, where |s| is the enumerative complexity-the number of distinct forms in the paradigm. However, H(M ) reflects not only the language's morphological complexity, but also its 8 Where the weight of the tree is taken to include the weight of the special edge empty \u2192 \u03c3 to the root node \u03c3. Thus, for each slot \u03c3, the weight of empty \u2192 \u03c3 is the cost of selecting \u03c3 as the root. It is an estimate of H(M .\u03c3 | S = s), the difficulty of predicting the \u03c3 form without any parent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 356, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the implementation, we actually decrement the weight of every edge \u03c3 \u2192 \u03c3 (including when \u03c3 = empty) by the weight of empty \u2192 \u03c3. This does not change the optimal tree, because it does not change the relative weights of the possible parents of \u03c3. However, it ensures that every \u03c3 now has root cost 0, as required by the Chu-Liu-Edmonds algorithm (which does not consider root costs). Notice that because H(X) \u2212 H(X | Y ) = I(X; Y ), the decremented weight is actually an estimate of \u2212I(M .\u03c3; M .\u03c3 ). Thus, finding the min-weight tree is equivalent to finding the tree that maximizes the total mutual information on the edges, just like the Chow-Liu algorithm (Chow and Liu, 1968) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 680, |
|
"text": "(Chow and Liu, 1968)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "''lexical complexity.'' Some of the bits needed to specify a lexeme's paradigm m are necessary merely to specify the stem. A language whose stems are numerous or highly varied will tend to have higher H(M ), but we do not wish to regard it as morphologically complex simply on that basis. We can decompose H(M ) into", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "H(M ) = H(M .\u03c3) lexical entropy + H(M | M .\u03c3) morphological entropy (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where\u03c3 denotes the single lowest-entropy slot,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c3 def = argmin \u03c3 H(M .\u03c3)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "and we estimate H(M .\u03c3) for any \u03c3 using the seq2seq distribution q \u03b8 (M .\u03c3 = w | M.empty = ), which can be regarded as a model for generating forms of slot \u03c3 from scratch. We will refer to\u03c3 as the lemma because it gives in some sense the simplest form of the lexeme, although it is not necessarily the slot that lexicographers use as the citation form for the lexeme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We now define i-complexity as the entropy per form when predicting the remaining forms of M from the lemma:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H(M | M .\u03c3) |s| \u2212 1", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where the numerator can be obtained by subtraction via equation 5. This is a fairer representation of the morphological irregularity (e.g., the average difficulty in predicting the inflectional ending that is added to a given stem). Notice that if |s| = 1 (an isolating language), the morphological complexity is appropriately undefined, since no inflectional endings are ever added to the stem. If we had allowed the lexical entropy H(M .\u03c3) to remain in the numerator, then a language with larger e-complexity |s| would have amortized that term over more forms-meaning that larger e-complexity would have tended to lead to lower i-complexity, other things equal. By removing that term from the numerator, our definition (7) eliminates this as a possible reason for the observed tradeoff between e-complexity and i-complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
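{

"text": "Equation (7) itself is a one-liner once the two entropy estimates are in hand; a sketch (ours) with hypothetical numbers:\n\ndef i_complexity(H_paradigm, H_lemma, num_distinct_forms):\n    # Equation (7): (H(M) - H(M.lemma)) / (|s| - 1). The lexical entropy of\n    # the lemma slot is subtracted out, and the lemma itself does not pay.\n    if num_distinct_forms < 2:\n        raise ValueError('undefined for isolating paradigms (|s| = 1)')\n    return (H_paradigm - H_lemma) / (num_distinct_forms - 1)\n\n# Hypothetical estimates: 20 bits per paradigm, 14 bits per lemma, 5 forms.\nprint(i_complexity(20.0, 14.0, 5))  # 1.5 bits per non-lemma form",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "From Paradigm Entropy to i-Complexity",

"sec_num": "5"

},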
|
{ |
|
"text": "Multiple Paradigm Shapes. Now, we consider the more general case where multiple paradigm shapes are allowed: |S| \u2265 1. Again we are interested in the entropy per non-lemma form. The i-complexity is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H(S) + s p(S = s)H(M | M .\u03c3(s), S = s) s p(S = s)(|s| \u2212 1) (8) where\u03c3 (s) def = argmin \u03c3 H(M .\u03c3 | S = s)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the case where |s| and\u03c3(s) are constant over all S, this reduces to equation 7. This is because the numerator is essentially an expanded formula for the conditional entropy in (7)-the only wrinkle is that different parts of it condition on different slots.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To estimate equation 8 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "N i=1 \u2212 \uf8eb \uf8ed log q(S = s i ) + log q(M = m i | S = s i ) \u2212 log q(M .\u03c3(s i ) = m i .\u03c3(s i ) | S = s i ) \uf8f6 \uf8f8 N i=1 |s i | \u2212 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(10) where we have multiplied both the numerator and denominator by N . In short, the denominator is the total number of non-lemma forms in the test set, and the numerator is the total number of bits that our model needs to predict these forms (including the paradigm shapes s i ) given the lemmas. The numerator of equation 10is an upper bound on the numerator of equation 8since it uses (conditional) cross-entropies rather than (conditional) entropies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Paradigm Entropy to i-Complexity", |
|
"sec_num": "5" |
|
}, |
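{

"text": "The test-set estimator in equation (10), as a sketch (ours; the log_q_* arguments are hypothetical scoring functions returning log2 probabilities under the trained model):\n\ndef i_complexity_estimate(test, log_q_shape, log_q_paradigm, log_q_lemma):\n    # test: list of (shape, paradigm) pairs. Numerator: total bits needed to\n    # predict each paradigm's shape and non-lemma forms given its lemma;\n    # denominator: total number of non-lemma (distinct) forms in the test set.\n    num, den = 0.0, 0\n    for s, m in test:\n        num -= log_q_shape(s) + log_q_paradigm(m, s) - log_q_lemma(m, s)\n        den += len(set(m.values())) - 1\n    return num / den",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "From Paradigm Entropy to i-Complexity",

"sec_num": "5"

},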
|
|
{ |
|
"text": "Our formulation of the low-entropy principle differs somewhat from Ackerman and Malouf (2013) ; the differences are highlighted below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 93, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Heuristic Approximation to p. Ackerman and Malouf (2013) first construct what we regard as a heuristic approximation to the joint distribution p over forms in a paradigm. They provide a heuristically chosen candidate set of potential inflections. Then, they consider a distribution r(m.\u03c3 | m.\u03c3 ) that selects among those forms. In contrast to our neural sequence-to-sequence approach, this distribution unfortunately does not have support over \u03a3 * and, thus, cannot consider changes other than substitution of morphological exponents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 56, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As a concrete example of r, consider Table 1' . . -a) swapsa for \u2205 with probability 2 /3 and for -o with probability 1 /3. We reiterate that no other output has positive probability under their model, for example, swapping -a for -es or ablaut of a stem vowel. In contrast, our p allows arbitrary irregulars ( \u00a76.1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Table 1'", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Average Conditional Entropy. The second difference is their use of the pairwise conditional entropies between cells. They measure the complexity of the entire paradigm by the average conditional entropy:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "1 n 2 \u2212 n \u03c3 \u03c3 =\u03c3 H(M .\u03c3 | M .\u03c3 ).", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
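{

"text": "For comparison, the average conditional entropy of equation (11) can be computed directly from a suffix table like Table 1; a sketch (ours), assuming the inflection classes are equiprobable:\n\nimport math\nfrom collections import Counter\n\ndef avg_conditional_entropy(table):\n    # table: list of class rows, each a dict slot -> suffix (as in Table 1).\n    # Returns (1/(n^2 - n)) * sum over ordered slot pairs of H(sigma | sigma').\n    slots = list(table[0])\n    total = 0.0\n    for tgt in slots:\n        for src in slots:\n            if src == tgt:\n                continue\n            by_src = {}\n            for row in table:\n                by_src.setdefault(row[src], []).append(row[tgt])\n            for outs in by_src.values():  # H(tgt | src) = sum_x p(x) H(tgt | x)\n                p_src = len(outs) / len(table)\n                counts = Counter(outs)\n                total += p_src * -sum((c / len(outs)) * math.log2(c / len(outs))\n                                      for c in counts.values())\n    n = len(slots)\n    return total / (n * n - n)\n\nrows = [{'NOM.SG': '-o', 'NOM.PL': '-a'},\n        {'NOM.SG': '-\u2205', 'NOM.PL': '-a'},\n        {'NOM.SG': '-\u2205', 'NOM.PL': '-a'}]\nprint(avg_conditional_entropy(rows))  # ~0.459: only H(NOM.SG | NOM.PL) is nonzero",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Methodological Comparison to Ackerman and Malouf (2013)",

"sec_num": "6"

},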
|
{ |
|
"text": "This differs from our tree-based measure, in which an irregular form only needs to be derived from its parent-possibly a similar or even syncretic irregular form-rather than from all other forms in the paradigm. So it ''only needs to pay once'' and it even ''shops around for the cheapest deal.'' Also, in our measure, the lemma does not ''pay'' at all.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Ackerman and Malouf measure conditional entropies, which are simple to compute because their model q is simple. (Again, it only permits a small number of possible outputs for each input, based on the finite set of allowed morpheme substitutions that they annotated by hand.) In contrast, our estimate uses conditional cross-entropies, asking whether our q can predict real held-out forms distributed according to p. (Ralli, 1994 (Ralli, , 2002 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 428, |
|
"text": "(Ralli, 1994", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 443, |
|
"text": "(Ralli, , 2002", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "SINGULAR PLURAL CLASS NOM GEN ACC VOC NOM GEN ACC VOC 1 -os -u -on -e -i -on -us -i 2 -s -\u2205 -\u2205 -\u2205 -es -on -es -es 3 -\u2205 -s -\u2205 -\u2205 -es -on -es -es 4 -\u2205 -s -\u2205 -\u2205 -is -on -is -is 5 -o -u -o -o -a -on -a -a 6 -\u2205 -u -\u2205 -\u2205 -a -on -a -a 7 -os -us -os -os -i -on -i -i 8 -\u2205 -os -\u2205 -\u2205 -a -on -a -a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Methodological Comparison to", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Now, we offer a critique of Ackerman and Malouf (2013) on three points: (i) different linguistic theories dictating how words are subdivided into morphemes may offer different results, (ii) certain types of morphological irregularity, particularly suppletion, aren't handled, and (iii) average conditional entropy overestimates the i-complexity in comparison to joint entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critique of Ackerman and Malouf (2013)", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Theory-Dependent Complexity. We consider a classic example from English morphophonology that demonstrates the effect of the specific analysis chosen. In regular English plural formation, the speaker has three choices: and [ z] . Here are two potential analyses. One could treat this as a case of pure allomorphy with three potential, unrelated suffixes. Under such an analysis, the entropy will reflect the empirical frequency of the three possibilities found in some data set: roughly, 1 /4 log 1 /4 + 3 /8 log 3 /8 + 3 /8 log 3 /8 \u2248 1.56127. On the other hand, if we assume a different model with a unique underlying affix /z/, which is attached and then converted to either [z], [s], or [ z] by an application of perfectly regular phonology, this part of the morphological system of English has entropy of 0-one choice. See Kenstowicz (1994, p. 72) for a discussion of these alternatives from a theoretical standpoint. Note that our goal is not to advocate for one of these analyses, but merely to suggest that Ackerman and Malouf (2013)'s quantity is analysis-dependent. 9 In contrast, our 9 Bonami and Beniamine (2016) have made a similar point (Rob Malouf, personal communication). Other proposed morphological complexity metrics have relied on a similar assumption (e.g., Bane, 2008) . approach is theory-agnostic in that we jointly learn surface-to-surface transformations, reminiscent of a-morphorous morphology (Anderson, 1992) , and thus our estimate of paradigm entropy does not suffer this drawback. Indeed, our assumptions are limited-recurrent neural networks are universal approximators. It has been shown that any computable function can be computed by some finite recurrent neural network Sontag, 1991, 1995) . Thus, the only true assumption we make of morphology is mild: We assume it is Turing-computable. That behavior is Turingcomputable is a rather fundamental tenet of cognitive science (McCulloch and Pitts, 1943; Sobel and Li, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 226, |
|
"text": "and [ z]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 694, |
|
"text": "[s], or [ z]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 851, |
|
"text": "Kenstowicz (1994, p. 72)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1094, |
|
"end": 1095, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1279, |
|
"end": 1290, |
|
"text": "Bane, 2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1421, |
|
"end": 1437, |
|
"text": "(Anderson, 1992)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1707, |
|
"end": 1726, |
|
"text": "Sontag, 1991, 1995)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1911, |
|
"end": 1938, |
|
"text": "(McCulloch and Pitts, 1943;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1939, |
|
"end": 1958, |
|
"text": "Sobel and Li, 2013)", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critique of Ackerman and Malouf (2013)", |
|
"sec_num": "6.1" |
|
}, |
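{

"text": "The two analyses above differ in a directly computable way; a quick check of the numbers (ours):\n\nimport math\n\ndef entropy(probs):\n    return -sum(p * math.log2(p) for p in probs if p > 0)\n\n# Pure allomorphy: three unrelated suffixes with empirical frequencies.\nprint(entropy([1/4, 3/8, 3/8]))  # ~1.56127 bits\n\n# A single underlying /z/ plus deterministic regular phonology: one choice.\nprint(entropy([1.0]))  # 0.0 bits",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Critique of Ackerman and Malouf (2013)",

"sec_num": "6.1"

},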
|
|
{ |
|
"text": "In our approach, theory dependence is primarily introduced through the selection of slots in our paradigms, which is a form of bias that would be present in any human-derived set of morphological annotations. A key example of this is the way in which different annotators or annotation standards may choose to limit or expand syncretismsituations where the same string-identical form may fill multiple different paradigm slots. For example, Finnish has two accusative inflections for nouns and adjectives, one always coinciding in form with the nominative and the other coinciding with the genitive. Many grammars therefore omit these two slots in the paradigm entirely, although some include them. Depending on which linguistic choice annotators make, the language could appear to have more or fewer paradigm slots. We have carefully defined our e-complexity and i-complexity metrics so that they are not sensitive to these choices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critique of Ackerman and Malouf (2013)", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As a second example of annotation dependence, different linguistic theories might disagree about which distinctions constitute productive inflectional morphology, and which are derivational or even fixed lexical properties. For example, our dataset for Turkish treats causative verb forms as derivationally related lexical items. The number of apparent slots in the Turkish inflectional paradigms is reduced because these forms were excluded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critique of Ackerman and Malouf (2013)", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Morphological Irregularity. A second problem with the model in Ackerman and Malouf (2013) is its inability to treat certain kinds of irregularity, particularly cases of suppletion. As far as we can tell, the model is incapable of evaluating cases of morphological suppletion unless they are explicitly encoded in the model. Consider, again, the case of the English suppletive past tense form wentif one's analysis of the English base is effectively a distribution of the choices add [d], add [t], and [ d], one will assign probability 0 to went as the past tense of go. We highlight the importance of this point because suppletive forms are certainly very common in academic English: the plural of binyan is binyanim and the plural of lemma is lemmata. It is unlikely that native English speakers possess even a partial model of Hebrew and Greek nominal morphology-a more plausible scenario is simply that these forms are learned by rote. As speakers and hearers are capable of producing and understanding these forms, we should demand the same capacity of our models. Not doing so also ties into the point in the previous section about theory-dependence since it is ultimately the linguist-supported by some theoretical notion-who decides which forms are deemed irregular and hence left out of the analysis. We note that these restrictive assumptions are relatively common in the literature, for example, Allen and Becker (2015)'s sublexical learner is likewise incapable of placing probability mass on irregulars. 10 Average Conditional Entropy versus Joint Entropy. Finally, we take issue with the formulation of paradigm entropy as average conditional entropy, as exhibited in equation (11). For one, it does not correspond to the entropy of any actual joint distribution p(M ), and has no obvious mathematical interpretation. Second, it is Priscian (Robins, 2013) in its analysis in that any form can be generated from any other, which, in practice, will cause it to overestimate the i-complexity of a morphological system. Consider the German dative plural H\u00e4nden (from the German Hand ''hand''). Predicting this form from the nominative singular Hand is difficult, but predicting it from the nominative plural H\u00e4nde is trivial: just add the suffix -n. In Ackerman and Malouf (2013) 's formulation, r(H\u00e4nden | Hand) and r(H\u00e4nden | H\u00e4nde) both contribute to the paradigm's entropy with the former substantially raising the quantity. Our method in \u00a74.4 is able to select the second term and regard H\u00e4nden as predictable once H\u00e4nde is in hand.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 89, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1516, |
|
"end": 1518, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2262, |
|
"end": 2288, |
|
"text": "Ackerman and Malouf (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critique of Ackerman and Malouf (2013)", |
|
"sec_num": "6.1" |
|
}, |
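To make the last point concrete, here is a toy numerical sketch. The entropy values are invented for illustration only: averaging over all source slots, as in equation (11), penalizes the paradigm for the hard mapping from Hand, whereas selecting the best source, as our method in §4.4 does, treats Händen as nearly free once Hände is known.

```python
# Hypothetical conditional entropies (in bits) for predicting the
# German dative plural "Haenden". The numbers are invented purely
# for illustration; they are not measured values from our data.
cond_entropy = {
    "NOM.SG (Hand)": 3.0,    # hard: umlaut and suffix must be guessed
    "NOM.PL (Haende)": 0.1,  # easy: just append the suffix -n
}

# Ackerman and Malouf (2013)-style average over all source slots:
avg = sum(cond_entropy.values()) / len(cond_entropy)
print(f"average over sources: {avg:.2f} bits")   # 1.55 bits

# Best-source selection, the idea behind our directed-tree model:
best = min(cond_entropy.values())
print(f"best single source:   {best:.2f} bits")  # 0.10 bits
```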
|
{ |
|
"text": "Our experimental design is now fairly straightforward: plot e-complexity versus i-complexity over as many languages as possible, We then devise a numerical test of whether the complexity trade-off conjecture ( \u00a71) appears to hold.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "At the moment, the largest source of annotated full paradigms is the UniMorph dataset (Sylak-Glassman et al., 2015; Kirov et al., 2018) , which contains data that have been extracted from Wiktionary, as well as other morphological lexica and analyzers, and then converted into a universal format. A partial subset of UniMorph has been used in the running of the SIGMORPHON-CoNLL 2017 and 2018 shared tasks on morphological inflection generation (Cotterell et al., 2017a (Cotterell et al., , 2018b .", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 115, |
|
"text": "(Sylak-Glassman et al., 2015;", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 135, |
|
"text": "Kirov et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 469, |
|
"text": "(Cotterell et al., 2017a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 496, |
|
"text": "(Cotterell et al., , 2018b", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and UniMorph Annotation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "We use verbal paradigms from 33 typologically diverse languages, and nominal paradigms from 18 typologically diverse languages. We only considered languages that had at least 700 fully annotated verbal or nominal paradigms, as the neural methods we deploy required a large amount of training example to achieve high performance. 11 As the neural methods require a large set of annotated training examples to achieve high performance, it is difficult to use them in a lower-resource scenario.", |
|
"cite_spans": [ |
|
{ |
|
"start": 329, |
|
"end": 331, |
|
"text": "11", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and UniMorph Annotation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "To estimate a language's e-complexity ( \u00a72.2.1), we average over all paradigms in the UniMorph inflected lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and UniMorph Annotation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "To estimate i-complexity, we first partition those paradigms into training, development and test sets. We identify the paradigm shapes from the training set ( \u00a74.1). We also use the training set to train the parameters \u03b8 of our conditional distribution ( \u00a74.3), then estimate conditional entropies on the development set and use Edmonds's algorithm to select a global model structure for each shape ( \u00a74.4). Now we evaluate i-complexity on the test set (equation 10). Using held-out test data gives an unbiased estimate of a model's predictive ability, which is why it is standard practice in statistical NLP, though less common in quantitative linguistics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and UniMorph Annotation", |
|
"sec_num": "7.1" |
|
}, |
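A minimal sketch of the model-structure step, assuming the networkx library and assuming that the dev-set conditional entropies have already been estimated for every ordered slot pair (the function and variable names are ours, not the paper's):

```python
import networkx as nx

def best_tree(slots, H):
    """Select a global model structure: H[(src, tgt)] is the dev-set
    estimate of the conditional entropy of slot tgt's form given slot
    src's form, with "LEMMA" acting as the root. The minimum-weight
    spanning arborescence (Edmonds, 1967) is the directed tree along
    which the whole paradigm is cheapest to predict."""
    G = nx.DiGraph()
    G.add_nodes_from(["LEMMA"] + list(slots))
    for (src, tgt), h in H.items():
        G.add_edge(src, tgt, weight=h)
    return nx.minimum_spanning_arborescence(G)

# Toy usage with invented entropy estimates (bits):
H = {("LEMMA", "NOM.PL"): 0.8, ("LEMMA", "DAT.PL"): 3.0,
     ("NOM.PL", "DAT.PL"): 0.1, ("DAT.PL", "NOM.PL"): 0.2}
tree = best_tree(["NOM.PL", "DAT.PL"], H)
print(sorted(tree.edges()))
# [('LEMMA', 'NOM.PL'), ('NOM.PL', 'DAT.PL')]
```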
|
{ |
|
"text": "We experiment separately on nominal and verbal lexicons. For i-complexity, we hold out at random 50 full paradigms for the development set, and 50 other full paradigms for the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "For comparability across languages, we tried to ensure a ''standard size'' for the training set D train . We sampled it from the remaining data using two different designs, to address the fact that different languages have different-size paradigms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Equal Number of Paradigms (''purple scheme'').", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "In the first regime, D train (for each language) is derived from 600 randomly chosen non-held-out paradigms m. We trained the reinflection model in \u00a74.4 on all non-syncretic pairs within these paradigms, as described in \u00a74.3. This disadvantages languages with small paradigms, as they train on fewer pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Equal Number of Pairs (''green scheme''). In the second regime, we trained the reinflection model in \u00a74.4 on 60,000 non-syncretic pairs (m.\u03c3 , m.\u03c3) (where \u03c3 may be empty) sampled without replacement from the non-held-out paradigms. 12 This matches the amount of training data, but may disadvantage languages with large paradigms, since the reinflection model will see fewer examples of any individual mapping be-tween paradigm slots. We call this the ''green scheme.'' Model and Training Details. We train the seq2seqwith-attention model using the OpenNMT toolkit (Klein et al., 2017) . We largely follow the recipe given in Kann and Sch\u00fctze (2016) , the winning submission on the 2016 SIGMORPHON shared task for inflectional morphology. Accordingly, we use a character embedding size of 300, and 100 hidden units in both the encoder and decoder. Our gradient-based optimization method was AdaDelta (Zeiler, 2012) with a minibatch size of 80. We trained for 20 epochs, which yielded 20 models via early stopping. We selected the model that achieved the highest average log p(m.\u03c3 | m.\u03c3 ) on (\u03c3 , \u03c3) pairs from the development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 234, |
|
"text": "12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 584, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 648, |
|
"text": "Kann and Sch\u00fctze (2016)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "7.2" |
|
}, |
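The green scheme's pair sampling can be sketched as follows. This is a simplified illustration under our own assumptions about the data structure (each paradigm as a dict from slot to surface form), not the actual preprocessing code:

```python
import random
from itertools import permutations

def sample_pairs(paradigms, k=60000, seed=0):
    """Sample up to k (source, target) reinflection training pairs
    without replacement. Each paradigm is assumed to be a dict from
    slot to surface form; a pair whose two forms are string-identical
    is skipped (our reading of "non-syncretic"). If fewer than k
    pairs exist, all of them are used, as in footnote 12."""
    pool = []
    for m in paradigms:
        for src, tgt in permutations(m, 2):
            if m[src] != m[tgt]:
                pool.append((src, m[src], tgt, m[tgt]))
    random.Random(seed).shuffle(pool)
    return pool[:k]

# Toy usage:
paradigms = [{"LEMMA": "Hand", "NOM.PL": "Haende", "DAT.PL": "Haenden"}]
print(len(sample_pairs(paradigms)))  # 6 ordered non-identical pairs
```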
|
{ |
|
"text": "Our results are [plotted in Figure 2 , where each dot represents a language. We see little difference between the green and the purple training sets, though it was not clear a priori that this would be so.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 36, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The plots appear to show a clear trade-off between i-complexity and the e-complexity. We now provide quantitative support for this impression by constructing a statistical significance test. Visually, our low-entropy trade-off conjecture boils down to the claim that languages cannot exist in the upper right-hand corner of the graph, that is, they cannot have both high e-complexity and high i-complexity. In other words, the upper-right hand corner of the graph is ''emptier'' than it would be by chance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "How can we quantify this? The Pareto curve for a multi-objective optimization problem shows, for each x, the maximum value y of the second objective that can be achieved while keeping the first objective \u2265 x (and vice-versa). This is shown in Figure 2 as a step curve, showing the maximum i-complexity y that was actually achieved for each level x of e-complexity. This curve is the tightest non-increasing function that upper-bounds all of the observed points: We have no evidence from our sample of languages that any language can appear above the curve.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 251, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
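A minimal sketch of this construction, with one (e-complexity, i-complexity) point per language (function and variable names are ours):

```python
def pareto_area(points):
    """Area under the tightest non-increasing step curve that
    upper-bounds all (x, y) points. The curve height at x is
    max{y_i : x_i >= x}; we integrate from x = 0 to max(x_i)."""
    pts = sorted(points)                       # ascending in x
    heights = [y for _, y in pts]
    for i in range(len(heights) - 2, -1, -1):  # suffix maxima
        heights[i] = max(heights[i], heights[i + 1])
    area, prev_x = 0.0, 0.0
    for (x, _), h in zip(pts, heights):
        area += (x - prev_x) * h               # step of height h on (prev_x, x]
        prev_x = x
    return area

# Toy usage: three languages as (e-complexity, i-complexity) points.
print(pareto_area([(5, 2.0), (30, 1.2), (100, 0.4)]))
# 5*2.0 + 25*1.2 + 70*0.4 = 68.0
```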
|
{ |
|
"text": "We say that the upper right-hand corner is ''empty'' to the extent that the area under the Pareto curve is small. To ask whether it is indeed emptier than would be expected by chance, we Figure 2 : The x-axis is our measure of e-complexity, the average number of distinct forms in a paradigm. The y-axis is our estimate of i-complexity, the average bits per distinct non-lemma form. We overlay purple and green graphs ( \u00a77.2): to obtain the y coordinate, all the languages are trained on the same number of paradigms (purple scheme) or on the same number of slot pairs (green scheme). The purple curve is the Pareto curve for the purple points, and the area under it is shaded in purple; similarly for green. The languages are labeled with their two-letter ISO 639-1 codes in white text inside the colored circles.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "perform a nonparametric permutation test that destroys the claimed correlation between the e-complexity and i-complexity values. From our observed points {(x 1 , y 1 ), . . . , (x m , y m )}, we can stochastically construct a new set of points {(x 1 , y \u03c3(1) ), . . . , (x m , y \u03c3(m) )} where \u03c3 is a permutation of 1, 2, . . . , m selected uniformly at random. The resulting scatterplot is what we would expect under the null hypothesis of no correlation. Our p-value is the probability that the new scatterplot has an even emptier upper righthand corner-that is, the probability that the area under the null-hypothesis Pareto curve is less than or equal to the area under the actually observed Pareto curve. We estimate this probability by constructing 10,000 random scatterplots.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
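The test itself then takes only a few lines. This sketch reuses the pareto_area helper from the previous sketch; the trial count of 10,000 matches the text:

```python
import random

def permutation_p_value(points, trials=10000, seed=0):
    """Monte Carlo estimate of the p-value: the probability that
    randomly re-pairing the observed e- and i-complexity values
    (destroying any correlation) yields a Pareto area less than or
    equal to the observed one. Uses pareto_area from the previous
    sketch."""
    rng = random.Random(seed)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    observed = pareto_area(points)
    hits = 0
    for _ in range(trials):
        perm = ys[:]
        rng.shuffle(perm)  # a uniformly random permutation sigma
        if pareto_area(list(zip(xs, perm))) <= observed:
            hits += 1
    return hits / trials
```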
|
{ |
|
"text": "In the purple training scheme, we find that the upper right-hand corner is significantly empty, with p < 0.021 and p < 0.037 for the verbal and nominal paradigms, respectively. In the green training scheme, we find that the upper right-hand corner is significantly empty with p < 0.032 and p < 0.024 in the verbal and nominal paradigms, respectively. 9 Future Directions Frequency. Ackerman and Malouf hypothesized that i-complexity is bounded, and we have demonstrated that the bounds are stronger when e-complexity is high. This suggests further investigation as to where in the language these bounds apply. Such bounds are motivated by the notion that naturally occurring languages must be learnable. Presumably, languages with large paradigms need to be regular overall, because in such a language, the average word type is observed too rarely for a learner to memorize an irregular surface form for it. Yet even in such a language, some word types are frequent, because some lexemes and some slots are especially useful. Thus, if learnability of the lexicon is indeed the driving force, 13 then we should make the finer-grained prediction that irregularity may survive in the more frequently observed word types, regardless of paradigm size. Rarer forms are more likely to be predictable-meaning that they are either regular, or else irregular in a way that is predictable from a related frequent irregular (Cotterell et al., 2018a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1412, |
|
"end": 1437, |
|
"text": "(Cotterell et al., 2018a)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Dynamical models. We could even investigate directly whether patterns of morphological irregularity can be explained by the evolution of language through time. Languages may be shaped by natural selection or, more plausibly, by noisy transmission from each generation to the next (Hare and Elman, 1995; Smith et al., 2008) , in a natural communication setting where each learner observes some forms more frequently than others. Are naturally occurring inflectional systems more learnable (at least by machine learning algorithms) than would be expected by chance? Do artificial languages with unusual properties (for example, unpredictable rare forms) tend to evolve into languages that are more typologically natural?", |
|
"cite_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 302, |
|
"text": "(Hare and Elman, 1995;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 322, |
|
"text": "Smith et al., 2008)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We might also want to study whether children's morphological systems increase in i-complexity as they approach the adult system. Interestingly, this definition of i-complexity could also explain certain issues in first language acquisition, where children often overregularize (Pinker and Prince, 1988) : They impose the regular pattern on irregular verbs, producing forms like instead of ran. Children may initially posit an inflectional system with lower i-complexity, before converging on the true system, which has higher i-complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 302, |
|
"text": "(Pinker and Prince, 1988)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Phonology Plus Orthography. A human learner of a written language also has access to phonological information that could affect predictability. One could, for example, jointly model all the written and spoken forms within each paradigm, where the Bayesian network may sometimes predict a spoken slot from a written slot or vice-versa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Moving Beyond the Forms. The complexity of morphological inflection is only a small bit of the larger question of morphological typology. We have left many bits unexplored. In this paper, we have predicted orthographic forms from morphosyntactic feature bundles. Ideally, we would like to also predict which morphosyntactic bundles are realized as words within a language, and which bundles are syncretic. That is, what paradigm shapes are plausible or implausible?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In addition, our current treatment depends upon a paradigmatic treatment of morphology, which is why we have focused on inflectional morphology. In contrast, derivational morphology is often viewed as syntagmatic. 14 Can we devise quantitative formulation of derivational complexity-for example, extending to polysynthetic languages?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We have provided clean mathematical formulations of enumerative and integrative complexity of inflectional systems, using tools from generative modeling and deep learning. With an empirical study on noun and verb systems in 36 typologically diverse languages, we have exhibited a Paretostyle trade-off between the e-complexity and i-complexity of morphological systems. In short, a morphological system can mark a large number of morphosyntactic distinctions, as Finnish, Turkish, and other agglutinative and polysynthetic languages do; or it may have a high-level of unpredictability (irregularity); or neither. 15 But it cannot do both.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "10" |
|
}, |
|
{ |
|
"text": "The NLP community often focuses on e-complexity and views a language as morphologically complex if it has a profusion of unique forms, even if they are very predictable. The reason is probably our habit of working at the word-level, so that all forms not found in the training set are out-of-vocabulary (OOV). Indeed, NLP practitioners often use high OOV rates as a proxy for defining morphological complexity. However, as NLP moves to the character-level, we need other definitions of morphological richness. A language like Hungarian, with almost perfectly predictable morphology, may be easier to process than a language like German, with an abundance of irregularity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "10" |
|
}, |
|
{ |
|
"text": "See Baerman (2015, Part II) for a tour of alternative views of inflectional paradigms.2 We assume that the lexicon never contains distinct triples of the form ( , \u03c3, w) and ( , \u03c3, w ), so that M ( ).\u03c3 has a unique value if it is defined at all.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This verb has a paradigm of size 8: {be, am, are, is, was, were, been, being}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The same applies for conditional entropies as used in \u00a75.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Below, we will define the factors so that the generated m does-usually-have shape s. We will ensure that if two slots are syncretic in shape s, then their forms are in fact equal in m. But non-syncretic slots will also have a (tiny) probability of equal forms, so the model q \u03b8 (m | s) is deficient-it sums to slightly < 1 over the paradigms m that have shape s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similarly,Chow and Liu (1968) find the best treestructured undirected graphical model by computing the max-weighted undirected spanning tree. We need a directed model instead because \u00a74.3 provides conditional distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the computer science literature, it is far more common to construct distributions with support over \u03a3 *(Paz, 2003;Bouchard-C\u00f4t\u00e9 et al., 2007;Dreyer et al., 2008;Cotterell et al., 2014), which do not have this problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Focusing on data-rich languages should also help mitigate sample bias caused by variable-sized dictionaries in our database. In many languages, irregular words are also very frequent and may be more likely to be included in a dictionary first. If that's the case, smaller dictionaries might have lexical statistics skewed toward irregulars more so than larger dictionaries. In general, larger dictionaries should be more representative samples of a language's broader lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For a few languages, fewer than 60,000 pairs were available, in which case we used all pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rather than, say, description length of the lexicon(Rissanen and Ristad, 1994).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For paradigmatic treatments of derivational morphology, seeCotterell et al. (2017c) for a computational perspective and the references therein for theoretical perspectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Carstairs-McCarthy (2010) has pointed out that languages need not have morphology at all, though they must have phonology and syntax.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work supported in part by the National Science Foundation under grant no. 1718846. The first author was supported by a Facebook Fellowship. We want to thank Rob Malouf for providing extensive and very helpful feedback on multiple versions of the paper. However, the opinions in this paper are our own: Our acknowledgment does not constitute an endorsement by Malouf. We would also like to thank the anonymous reviewers along with action editor Chris Dyer and editor-in-chief Lillian Lee.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Morphological organization: The low conditional entropy conjecture", |
|
"authors": [ |
|
{ |
|
"first": "Farrell", |
|
"middle": [], |
|
"last": "Ackerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Malouf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Language", |
|
"volume": "89", |
|
"issue": "3", |
|
"pages": "429--464", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Farrell Ackerman and Robert Malouf. 2013. Mor- phological organization: The low conditional entropy conjecture. Language, 89(3):429-464.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Unpublished manuscript, University of British Columbia and Stony Brook University", |
|
"authors": [ |
|
{ |
|
"first": "Blake", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Becker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blake Allen and Michael Becker. 2015. Learning alternations from surface forms with sublexical phonology. Unpublished manuscript, Univer- sity of British Columbia and Stony Brook Uni- versity. Available as https://ling.auf.net/lingbuzz/ 002503.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A-Morphous Morphology", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Anderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "62", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen R. Anderson. 1992. A-Morphous Morphol- ogy, volume 62, Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Word Formation in Generative Grammar. Number 1 in Linguistic Inquiry Monographs", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Aronoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Aronoff. 1976. Word Formation in Gener- ative Grammar. Number 1 in Linguistic Inquiry Monographs. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The Oxford Handbook of Inflection", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Baerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Oxford Handbooks in Linguistic. Part II: Paradigms and their Variants", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Baerman. 2015. The Oxford Handbook of Inflection. Oxford Handbooks in Linguistic. Part II: Paradigms and their Variants", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Understanding and measuring morphological complexity: An introduction", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Baerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dunstan", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greville", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Corbett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Baerman, Dunstan Brown, and Greville G. Corbett. 2015. Understanding and measuring morphological complexity: An introduction. In Matthew Baerman, Dunstan Brown, and", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Understanding and measuring morphological complexity", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Greville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Corbett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greville G. Corbett, editors, Understanding and measuring morphological complexity. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Pro- ceedings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Quantifying and measuring morphological complexity", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 26th West Coast Conference on Formal Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max Bane. 2008. Quantifying and measuring mor- phological complexity. In Proceedings of the 26th West Coast Conference on Formal Lin- guistics, pages 69-76.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The child's learning of English morphology", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Berko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "Word", |
|
"volume": "14", |
|
"issue": "2-3", |
|
"pages": "150--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Language", |
|
"authors": [ |
|
{ |
|
"first": "Leonard", |
|
"middle": [], |
|
"last": "Bloomfield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1933, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonard Bloomfield. 1933. Language, University of Chicago Press. Reprint edition (October 15, 1984).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Joint predictiveness in inflectional paradigms", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Bonami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Beniamine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Word Structure", |
|
"volume": "9", |
|
"issue": "2", |
|
"pages": "156--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Bonami and Sarah Beniamine. 2016. Joint predictiveness in inflectional paradigms. Word Structure, 9(2):156-182.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A probabilistic approach to diachronic phonology", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "887--896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Percy Liang, Thomas Griffiths, and Dan Klein. 2007. A probabilistic approach to diachronic phonology. In Proceed- ings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 887-896, Prague.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "An estimate of an upper bound for the entropy of English", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computational Linguistics", |
|
"volume": "18", |
|
"issue": "1", |
|
"pages": "31--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lai. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Evolution of Morphology", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Carstairs-Mccarthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Carstairs-McCarthy. 2010. The Evolution of Morphology, volume 14. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Approximating discrete probability distributions with dependence trees", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Chow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cong", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "IEEE Transactions on Information Theory", |
|
"volume": "14", |
|
"issue": "3", |
|
"pages": "462--467", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. K. Chow and Cong N. Liu. 1968. Approx- imating discrete probability distributions with dependence trees. IEEE Transactions on Infor- mation Theory, 14(3):462-467.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "On the diachronic stability of irregularity in inflectional morphology", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.08262v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2018a. On the diachronic stability of irregularity in inflectional morphology. arXiv preprint arXiv:1804.08262v1.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Jason Eisner, and Mans Hulden. 2018b. The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u0117raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u0117raldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018b. The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1-27.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Jason Eisner, and Mans Hulden. 2017a. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages", |
|
"authors": [ |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Ryancotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "RyanCotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, Vancouver.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The SIGMORPHON 2016 shared task-morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak- Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task-morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Pho- netics, Phonology, and Morphology, pages 10-22, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Stochastic contextual edit distance and probabilistic FSTs", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "625--630", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 625-630. Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Modeling word forms using latent underlying morphs and phonology", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3433--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent under- lying morphs and phonology. Transactions of the Association for Computational Linguistics (TACL), 3433-447.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Neural graphical models over strings for principal parts morphological paradigm completion", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "759--765", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, John Sylak-Glassman, and Christo Kirov. 2017b. Neural graphical models over strings for principal parts morphological para- digm completion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 759-765, Valencia.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Paradigm completion for derivational morphology", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "725--731", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Ekaterina Vylomova, Huda Khayrallah, Christo Kirov, and David Yarowsky. 2017c. Paradigm completion for derivational morphology. In Proceedings of the Confer- ence on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 725-731, Copenhagen.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Graphical models over multiple strings", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2009. Graphical models over multiple strings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 101-110. Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Discovering morphological paradigms from plain text using a Dirichlet process mixture model", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "616--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2011. Dis- covering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the Conference on Empir- ical Methods in Natural Language Processing (EMNLP), pages 616-627, Edinburgh.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Latent-variable modeling of string transductions with finite-state methods", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1080--1089", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer, Jason Smith, and Jason Eis- ner. 2008, October. Latent-variable modeling of string transductions with finite-state meth- ods. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1080-1089, Honolulu, HI.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Journal of Research of the", |
|
"authors": [], |
|
"year": 1967, |
|
"venue": "National Bureau of Standards B", |
|
"volume": "71", |
|
"issue": "4", |
|
"pages": "233--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jack Edmonds. 1967. Optimum branchings. Jour- nal of Research of the National Bureau of Standards B, 71(4):233-240.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The structure of Riau Indonesian", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Gil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Nordic Journal of Linguistics", |
|
"volume": "17", |
|
"issue": "2", |
|
"pages": "179--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Gil. 1994. The structure of Riau Indonesian. Nordic Journal of Linguistics, 17(2):179-200.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Learning and morphological change", |
|
"authors": [ |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Hare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Cognition", |
|
"volume": "56", |
|
"issue": "1", |
|
"pages": "61--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mary Hare and Jeffrey L. Elman. 1995. Learn- ing and morphological change. Cognition, 56(1):61-98.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A Course In Modern Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Hockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles F. Hockett. 1958. A Course In Modern Linguistics. The MacMillan Company.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Single-model encoder-decoder with explicit morphological representation for reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "555--560", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Single-model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 555-560, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Phonology in Generative Grammar", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kenstowicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael J. Kenstowicz. 1994. Phonology in Generative Grammar. Blackwell Oxford.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Archi (Caucasian -Daghestanian)", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Aleksandr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kibrik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The Handbook of Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "455--476", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aleksandr E. Kibrik. 1998, Archi (Caucasian - Daghestanian). In The Handbook of Morphol- ogy, pages 455-476, Blackwell Oxford.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "UniMorph 2.0: Universal morphology", |
|
"authors": [ |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of LREC 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christo Kirov, Ryan Cotterell, John Sylak- Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian Mielke, Arya D. McCarthy, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal morphology. In Proceedings of LREC 2018, Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "OpenNMT: Open-source toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A logical calculus of the ideas immanent in nervous activity", |
|
"authors": [ |
|
{ |
|
"first": "Warren", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcculloch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Pitts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1943, |
|
"venue": "Bulletin of Mathematical Biophysics", |
|
"volume": "5", |
|
"issue": "4", |
|
"pages": "115--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Warren S. McCulloch and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4):115-133.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "The world's simplest grammars are creole grammars", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Mcwhorter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Linguistic Typology", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "125--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John McWhorter. 2001. The world's simplest grammars are creole grammars. Linguistic Typol- ogy, 5(2):125-66.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Linguistic Complexity and Information: Quantitative Approaches", |
|
"authors": [ |
|
{ |
|
"first": "Oh", |
|
"middle": [], |
|
"last": "Yoon Mi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Mi Oh. 2015. Linguistic Complexity and Information: Quantitative Approaches. Ph.D. thesis, Universit\u00e9 de Lyon, France.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Probabilistic Automata", |
|
"authors": [ |
|
{ |
|
"first": "Azaria", |
|
"middle": [], |
|
"last": "Paz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Azaria Paz. 2003. Probabilistic Automata, John Wiley and Sons.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "A cross-language perspective on speech information rate", |
|
"authors": [ |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Pellegrino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Coup\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Egidio", |
|
"middle": [], |
|
"last": "Marsico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Language", |
|
"volume": "87", |
|
"issue": "3", |
|
"pages": "539--558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fran\u00e7ois Pellegrino, Christophe Coup\u00e9, and Egidio Marsico. 2011. A cross-language perspective on speech information rate. Language, 87(3):539-558.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "On language and connectionism: Analysis of a parallel distributed processing model of language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Pinker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Prince", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Cognition", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "73--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acqui- sition. Cognition, 28(1):73-193.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Feature representations and feature-passing operations in Greek nominal inflection", |
|
"authors": [ |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Ralli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 8th Symposium on English and Greek Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angela Ralli. 1994. Feature representations and feature-passing operations in Greek nominal inflection. In Proceedings of the 8th Symposium on English and Greek Linguistics, pages 19-46.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "The role of morphology in gender determination: evidence from Modern Greek", |
|
"authors": [ |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Ralli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Linguistics", |
|
"volume": "40", |
|
"issue": "3", |
|
"pages": "519--552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angela Ralli. 2002. The role of morphology in gender determination: evidence from Modern Greek. Linguistics, 40(3; ISSU 379):519-552.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Language acquisition in the MDL framework", |
|
"authors": [ |
|
{ |
|
"first": "Jorma", |
|
"middle": [], |
|
"last": "Rissanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ristad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jorma Rissanen and Eric S. Ristad. 1994. Lan- guage acquisition in the MDL framework. In Eric S. Ristad, editor, Language Computation. American Mathematical Society, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "A Short History of Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Henry Robins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Henry Robins. 2013. A Short History of Linguistics. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Comparing complexity measures", |
|
"authors": [ |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Approaches to Morphological Complexity", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beno\u00eet Sagot. 2013. Comparing complexity mea- sures. In Computational Approaches to Mor- phological Complexity, Paris.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Language: An Introduction to the Study of Speech", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Sapir", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1921, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Sapir. 1921. Language: An Introduction to the Study of Speech.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "A mathematical theory of communication", |
|
"authors": [ |
|
{ |
|
"first": "Claude", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Shannon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1948, |
|
"venue": "Bell Systems Technical Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. Bell Systems Technical Journal, 27.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Turing computability with neural nets", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Siegelmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Applied Mathematics Letters", |
|
"volume": "4", |
|
"issue": "6", |
|
"pages": "77--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hava T. Siegelmann and Eduardo D. Sontag. 1991. Turing computability with neural nets. Applied Mathematics Letters, 4(6):77-80.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "On the computational power of neural nets", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Siegelmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of Computer and System Sciences", |
|
"volume": "50", |
|
"issue": "1", |
|
"pages": "132--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hava T. Siegelmann and Eduardo D. Sontag. 1995. On the computational power of neural nets. Journal of Computer and System Sciences, 50(1):132-150.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Cultural transmission and the evolution of human behaviour", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lewandowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Philosophical Transactions B", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1098/rstb.2008.0147" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Griffiths, and Stephan Lewandowsky. 2008. Cultural transmission and the evolution of human behaviour. Philosophical Transactions B. doi.org/10.1098/rstb.2008.0147.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "The Cognitive Sciences: An Interdisciplinary Approach", |
|
"authors": [ |
|
{ |
|
"first": "Carolyn", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Sobel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolyn P. Sobel and Paul Li. 2013. The Cognitive Sciences: An Interdisciplinary Approach. Sage Publications.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Morphological Theory: An Introduction to Word Structure in Generative Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Spencer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Spencer. 1991. Morphological Theory: An Introduction to Word Structure in Gener- ative Grammar. Wiley-Blackwell.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Irregularity in Morphology (and beyond)", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Stolz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hitomi", |
|
"middle": [], |
|
"last": "Otsuka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aina", |
|
"middle": [], |
|
"last": "Urdze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Van Der Auwera", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Stolz, Hitomi Otsuka, Aina Urdze, and Johan van der Auwera. 2012. Irregularity in Morphology (and beyond), volume 11. Walter de Gruyter.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Infor- mation Processing Systems (NIPS), pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "The composition and use of the universal morphological feature schema (Unimorph schema)", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Sylak-Glassman. 2016, The composition and use of the universal morphological feature schema (Unimorph schema). Johns Hopkins University.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "A language-independent feature schema for inflectional morphology", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Que", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "674--680", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015, July. A language-independent feature schema for inflectional morphology. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL), pages 674-680, Beijing.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "ADADELTA: An adaptive learning rate method", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1212.5701v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701v1.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "s (simplified) Modern Greek example from Ackerman and Malouf (2013). The conditional distribution r(m.gen;sg | m.acc;pl = . . . -i) over genitive singular forms is peaked because there is exactly one possible transformation: Substituting -us for -i. Other conditional distributions for Modern Greek are less peaked: Ackerman and Malouf (2013) estimated that r(m.nom;sg | m.acc;pl = .", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "using a trained model q and a held-out test set, we follow \u00a73.3 by estimating all \u2212 log p(\u2022 \u2022 \u2022 ) terms in the entropies with our model surprisals \u2212 log q(\u2022 \u2022 \u2022 ), but using the empirical probabilities on the test set for all other p(\u2022 \u2022 \u2022 ) terms including p(S = s). Suppose the test set paradigms are m 1 , . . . , m N with shapes s 1 , . . . , s N respectively. Then taking q = q \u03b8 , our final estimate of the i-complexity (8) works out to" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Structuralist analysis of Modern Greek nominal inflection classes" |
|
} |
|
} |
|
} |
|
} |