{
"paper_id": "D13-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:40:53.788853Z"
},
"title": "Adaptor Grammars for Learning Non-Concatenative Morphology",
"authors": [
{
"first": "Jan",
"middle": [
"A"
],
"last": "Botha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford Oxford",
"location": {
"postCode": "OX1 3QD",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford Oxford",
"location": {
"postCode": "OX1 3QD",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper contributes an approach for expressing non-concatenative morphological phenomena, such as stem derivation in Semitic languages, in terms of a mildly context-sensitive grammar formalism. This offers a convenient level of modelling abstraction while remaining computationally tractable. The nonparametric Bayesian framework of adaptor grammars is extended to this richer grammar formalism to propose a probabilistic model that can learn word segmentation and morpheme lexicons, including ones with discontiguous strings as elements, from unannotated data. Our experiments on Hebrew and three variants of Arabic data find that the additional expressiveness to capture roots and templates as atomic units improves the quality of concatenative segmentation and stem identification. We obtain 74% accuracy in identifying triliteral Hebrew roots, while performing morphological segmentation with an F1-score of 78.1.",
"pdf_parse": {
"paper_id": "D13-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper contributes an approach for expressing non-concatenative morphological phenomena, such as stem derivation in Semitic languages, in terms of a mildly context-sensitive grammar formalism. This offers a convenient level of modelling abstraction while remaining computationally tractable. The nonparametric Bayesian framework of adaptor grammars is extended to this richer grammar formalism to propose a probabilistic model that can learn word segmentation and morpheme lexicons, including ones with discontiguous strings as elements, from unannotated data. Our experiments on Hebrew and three variants of Arabic data find that the additional expressiveness to capture roots and templates as atomic units improves the quality of concatenative segmentation and stem identification. We obtain 74% accuracy in identifying triliteral Hebrew roots, while performing morphological segmentation with an F1-score of 78.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised learning of morphology is the task of acquiring, from unannotated data, the intra-word building blocks of a language and the rules by which they combine to form words. This task is of interest both as a gateway for studying language acquisition in humans and as a way of producing morphological analyses that are of practical use in a variety of natural language processing tasks, including machine translation, parsing and information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A particularly interesting version of the morphology learning problem comes from languages that use templatic morphology, such as Arabic, Hebrew and Amharic. These Semitic languages derive verb and noun stems by interspersing abstract root morphemes into templatic structures in a nonconcatenative way. For example, the Arabic root k\u2022t\u2022b can combine with the template (i-a) to derive the noun stem kitab (book). Established morphological analysers typically ignore this process and simply view the derived stems as elementary units (Buckwalter, 2002) , or their account of it coincides with a requirement for extensive linguistic knowledge and hand-crafting of rules (Finkel and Stump, 2002; Schneider, 2010; Altantawy et al., 2010) . The former approach is bound to suffer from vocabulary coverage issues, while the latter clearly does not transfer easily across languages. The practical appeal of unsupervised learning of templatic morphology is that it can overcome these shortcomings.",
"cite_spans": [
{
"start": 532,
"end": 550,
"text": "(Buckwalter, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 667,
"end": 691,
"text": "(Finkel and Stump, 2002;",
"ref_id": "BIBREF15"
},
{
"start": 692,
"end": 708,
"text": "Schneider, 2010;",
"ref_id": "BIBREF37"
},
{
"start": 709,
"end": 732,
"text": "Altantawy et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised learning of concatenative morphology has received extensive attention, partly driven by the MorphoChallenge (Kurimo et al., 2010) in recent years, but that is not the case for root-templatic morphology (Hammarstr\u00f6m and Borin, 2011) .",
"cite_spans": [
{
"start": 121,
"end": 142,
"text": "(Kurimo et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 215,
"end": 244,
"text": "(Hammarstr\u00f6m and Borin, 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a model-based method that learns concatenative and root-templatic morphology in a unified framework. We build on two disparate strands of work from the literature: Firstly, we apply simple Range Concatenating Grammars (SRCGs) (Boullier, 2000) to parse contiguous and discontiguous morphemes from an input string. These grammars are mildly-context sensitive (Joshi, 1985) , a superset of context-free grammars that retains polynomial parsing time-complexity. Secondly, we generalise the nonparametric Bayesian learning framework of adaptor grammars (Johnson et al., 2007) to SRCGs. 1 This should also be rel-evant to other applications of probabilistic SRCGs, e.g. in parsing (Maier, 2010) , translation (Kaeshammer, 2013 ) and genetics (Kato et al., 2006) .",
"cite_spans": [
{
"start": 251,
"end": 267,
"text": "(Boullier, 2000)",
"ref_id": "BIBREF4"
},
{
"start": 382,
"end": 395,
"text": "(Joshi, 1985)",
"ref_id": "BIBREF25"
},
{
"start": 573,
"end": 595,
"text": "(Johnson et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 700,
"end": 713,
"text": "(Maier, 2010)",
"ref_id": "BIBREF32"
},
{
"start": 728,
"end": 745,
"text": "(Kaeshammer, 2013",
"ref_id": "BIBREF26"
},
{
"start": 761,
"end": 780,
"text": "(Kato et al., 2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to unannotated data, our method requires as input a minimal set of high-level grammar rules that encode basic intuitions of the morphology. This is where there would be room to become very language specific. Our aim, however, is not to obtain a best-published result in a particular language, but rather to create a method that is applicable across a variety of morphological processes. The specific rules used in our empirical evaluation on Arabic and Hebrew therefore contain hardly any explicit linguistic knowledge about the languages and are applicable across the family of Semitic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Concatenative morphology lends itself well to an analysis in terms of finite-state transducers (FSTs) (Koskenniemi, 1984) . With some additional effort, FSTs can also encode non-concatenative morphology (Kiraz, 2000; Beesley and Karttunen, 2003; Cohen-Sygal and Wintner, 2006; Gasser, 2009) . Despite this seeming adequacy of regular languages to describe morphology, we see two main shortcomings that motivate moving further up the Chomsky hierarchy of formal languages: first is the issue of learning. We are not aware of successful attempts at inducing FST-based morphological analysers in an unsupervised way, and believe the challenge lies in the fact that FSTs do not offer a convenient way of expressing prior linguistic intuitions to guide the learning process. Secondly, an FST composed of multiple machines might capture morphological processes well and excel at analysis, but interpretability of its internal operations are limited.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Koskenniemi, 1984)",
"ref_id": "BIBREF29"
},
{
"start": 203,
"end": 216,
"text": "(Kiraz, 2000;",
"ref_id": "BIBREF28"
},
{
"start": 217,
"end": 245,
"text": "Beesley and Karttunen, 2003;",
"ref_id": "BIBREF2"
},
{
"start": 246,
"end": 276,
"text": "Cohen-Sygal and Wintner, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 277,
"end": 290,
"text": "Gasser, 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "These shortcomings are overcome for concatenative morphology by context-free adaptor grammars, which allowed diverse segmentation models to be formulated and investigated within a single framework (Johnson et al., 2007; Johnson, 2008; Sirts and Goldwater, 2013) . In principle, that covers a wide range of phenomena (typical example language in parentheses): affixal inflection (Czech) and derivation (English), agglutinative derivation (Turkish, Finnish), compounding (German). Our agenda here is to extend that approach to include non-concatenative processes such as root-templatic derivation (Arabic), infixation (Tagalog) and circumfixation (Indonesian). In this pursuit, an abstraction that permits discontiguous constituents is a highly useful modelling tool, but requires looking beyond context-free grammars.",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "(Johnson et al., 2007;",
"ref_id": "BIBREF23"
},
{
"start": 220,
"end": 234,
"text": "Johnson, 2008;",
"ref_id": "BIBREF24"
},
{
"start": 235,
"end": 261,
"text": "Sirts and Goldwater, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "An idealised generative grammar that would capture all the aforementioned phenomena could look like this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "Word \u2192 (Pre * Stem Suf * ) +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "(1) e.g. English un+accept+able",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "Stem | Pre | Suf \u2192 Morph (2) Stem \u2192 intercal (Root, Template) (3) e.g. Arabic derivation k\u2022t\u2022b + i\u2022a \u21d2 kitab (book) Stem \u2192 infix (Stem, Infix) (4) e.g. Tagalog sulat (write) \u21d2 sumulat (wrote) Stem \u2192 circfix (Stem, Circumfix) (5) e.g. Indonesian percaya (to trust) \u21d2 kepercayaan (belief)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "where the symbols (excluding Word and Stem) implicitly expand to the relevant terminal strings. The bold-faced \"functions\" combine the potentially discontiguous yields of the argument symbols into single contiguous strings, e.g. infix(s\u2022ulat, um) produces stem sumulat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "Taken by themselves, the first two rules are simply a CFG that describes word formation as the concatenation of stems and affixes, a formulation that matches the underlying grammar of Morfessor (Creutz and Lagus, 2007) , a well-studied unsupervised model.",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "The key aim of our extension is that we want the grammar to capture a discontiguous string like k\u2022t\u2022b as a single constituent in a parse tree. This leads to well-understood problems in probabilistic grammars (e.g. what is this rule's probability?), but also corresponds to the linguistic consideration that k\u2022t\u2022b is a proper morpheme of the language (Prunet, 2006) .",
"cite_spans": [
{
"start": 350,
"end": 364,
"text": "(Prunet, 2006)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A powerful grammar for morphology",
"sec_num": "2"
},
{
"text": "In this section we define SRCGs formally and illustrate how they can be used to model nonconcatenative morphology. SRCGs define languages that are recognisable in polynomial time, yet can capture discontiguous elements of a string under a single category (Boullier, 2000) . An SRCG-rule operates on vectors of ranges in contrast to the way a CFG-rule operates on single ranges (spans). In other words, a non-terminal symbol in an SRCG (CFG) derivation can dominate a subset (substring) of terminals in an input string.",
"cite_spans": [
{
"start": 255,
"end": 271,
"text": "(Boullier, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simple range concatenating grammars",
"sec_num": "3"
},
{
"text": "An SRCG G is a tuple (N, T, V, P, S), with finite sets of non-terminals (N ), terminals (T ) and variables (V ), with a start symbol S \u2208 N . A rewrite rule p \u2208 P of rank r = \u03c1(p) \u2265 0 has the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "A(\u03b1 1 , . . . , \u03b1 \u03c8(A) ) \u2192 B 1 (\u03b2 1,1 , . . . , \u03b2 1,\u03c8(B 1 ) ) . . . B r (\u03b2 r,1 , . . . , \u03b2 r,\u03c8(Br) ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "where each \u03b1, \u03b2 \u2208 (T \u222a V ) * , and \u03c8(A) is the number of arguments a non-terminal A has, called its arity. By definition, the start symbol has arity 1. Any variable v \u2208 V appearing in a given rule must be used exactly once on each side of the rule. Terminating rules are written with as the right-hand side and thus have rank 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "A range is a pair of integers (i, j) denoting the substring w i+1 . . . w j of a string w = w 1 . . . w n . A non-terminal becomes instantiated when its variables are bound to ranges through substitution. Variables within an argument imply concatenation and therefore have to bind to adjacent ranges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "An instantiated non-terminal A is said to derive if the consecutive application of a sequence of instantiated rules rewrite it as . A string w is within the language defined by a particular SRCG iff the start symbol S, instantiated with the exhaustive range (0, w n ), derives .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "An important distinction with regard to CFGs is that, due to the instantiation mechanism, the ordering of non-terminals on the right-hand side of an SRCG rule is irrelevant, i.e. A(ab) \u2192 B(a)C(b) and A(ab) \u2192 C(b)B(a) are the same rule. 2 Consequently, the isomorphisms of any given SRCG derivation tree all encode the same string, which is uniquely defined through the instantiation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalism",
"sec_num": "3.1"
},
{
"text": "A fragment of the idealised grammar schema from the previous section ( \u00a72) can be rephrased as an SRCG by writing the rules in the newly introduced 2 Certain ordering restrictions over the variables within an argument need to hold for an SRCG to indeed be a simple RCG (Boullier, 2000) . notation, and supplying a definition of the intercal function as simply another rule of the grammar, with instantiation for w = kitab shown below:",
"cite_spans": [
{
"start": 269,
"end": 285,
"text": "(Boullier, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Word(wakitabi) Suf(i) i Stm(kitab) Template(i,a) Root(k,t,b) Pre(wa) a w k i t a b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Word(abc) \u2192 Pre(a) Stem(b) Suf(c) Stem(abcde) \u2192 Root(a, c, e) Template(b, d), Stem( 0..1 , 1..2 , 2..3 , 3..4 , 4..5 ) \u2192 Root( 0..1 , 2..3 , 4..5 ) Template( 1..2 , 3..4 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Given an appropriate set of grammar rules (as we present in \u00a75), we can parse an input string to obtain a tree as shown in Figure 1 . The overlapping branches of the tree demonstrate that this grammar captures something a CFG could not. From the parse tree one can read off the word's root morpheme and the template used.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Although SRCGs specify mildly context-sensitive grammars, each step in a derivation is context-freea node's expansion does not depend on other parts of the tree. This property implies that a recognition/parsing algorithm can have a worst-case time complexity that is polynomial in the input length n, O(n (\u03c1+1)\u03c8 ) for arity \u03c8 and rank \u03c1, which reduces to O(n 3\u03c8 ) for a binarised grammar. To capture the maximal case of a root with k \u2212 1 characters and k discontiguous templatic characters forming a stem would require a grammar that has arity \u03c8 = k. For Arabic, which has up to quadriliteral roots (k = 5), the time complexity would be O(n 15 ). 3 This is a daunting proposition for parsing, but we are careful to set up our application of SRCGs in such a way that this is not too big an obstacle:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Firstly, our grammars are defined over the characters that make up a word, and not over words that make up a sentence. As such, the input length n would tend to be shorter than when parsing full sentences from a corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "Secondly, we do type-based morphological analysis, a view supported by evidence from Goldwater et al. (2006) , so each unique word in a dataset is only ever parsed once with a given grammar. The set of word types attested in the data sources of interest here is fairly limited, typically in the tens of thousands. For these reasons, our parsing and inference tasks turn out to be tractable despite the high time complexity.",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "Goldwater et al. (2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application to morphological analysis",
"sec_num": "3.2"
},
{
"text": "The probabilistic extension of SRCGs is similar to the probabilistic extension of CFGs, and has been used in other guises (Kato et al., 2006; Maier, 2010) . Each rule r \u2208 P has an associated probability \u03b8 r such that r\u2208P A \u03b8 r = 1. A random string in the language of the grammar can then be obtained through a generative procedure that begins with the start symbol S and iteratively expands it until deriving : At each step for some current symbol A, a rewrite rule r is sampled randomly from P A in accordance with the distribution over rules and used to expand A. This procedure terminates when no further expansions are possible. Of course, expansions need to respect the range concatenating and ordering constraints imposed by the variables in rules. The expansions imply a chain of variable bindings going down the tree, and instantiation happens only when rewriting into s but then propagates back up the tree.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Kato et al., 2006;",
"ref_id": "BIBREF27"
},
{
"start": 142,
"end": 154,
"text": "Maier, 2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic SRCG",
"sec_num": "4.1"
},
{
"text": "The probability P (w, t) of the resulting tree t and terminal string w is the product r \u03b8 r over the sequence of rewrite rules used. This generative procedure is a conceptual device; in practice, one would care about parsing some input string under this probabilistic grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic SRCG",
"sec_num": "4.1"
},
{
"text": "A central property of the generative procedure underlying probabilistic SRCGs is the fact that each expansion happens independently, both of the other expansions in the tree under construction and of any other trees. To some extent, this flies in the face of the reality of estimating a grammar from text, where one would expect certain sub-trees to be used repeatedly across different input strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "Adaptor grammars weaken this independence assumption by allowing whole subtrees to be reused during expansion. Informally, they act as a cache of tree fragments whose tendency to be reused during expansion is governed by the choice of adaptor function. Following earlier applications of adaptor grammars (Johnson et al., 2007; Huang et al., 2011) , we employ the Pitman-Yor process (Pitman, 1995; Pitman and Yor, 1997) PYSRCAG defines a generative process over a set of trees T . Unadapted non-terminals A \u2208 N \\ M are expanded as before ( \u00a74.1). For each adapted non-terminal A \u2208 M , a cache C A is maintained for storing the terminating tree fragments expanded from A earlier in the process, and we denote the fragment corresponding to the i-th expansion of A as z i . In other words, the sequence of indices z i is the assignment of a sequence of expansions of A to particular tree fragments. Given a cache C A that has n previously generated trees comprising m unique trees each used n 1 , . . . , n m times (where n = k n k ), the tree fragment for the next expansion of A, z n+1 , is sampled conditional on the previous assignments z < according to",
"cite_spans": [
{
"start": 304,
"end": 326,
"text": "(Johnson et al., 2007;",
"ref_id": "BIBREF23"
},
{
"start": 327,
"end": 346,
"text": "Huang et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 382,
"end": 396,
"text": "(Pitman, 1995;",
"ref_id": "BIBREF34"
},
{
"start": 397,
"end": 418,
"text": "Pitman and Yor, 1997)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "z n+1 |z < \u223c n k \u2212a n+b if z n+1 = k \u2208 [1, m] ma+b n+b if z n+1 = m + 1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "where a and b are those elements of a and b corresponding to A. The first case denotes the situation where a previously cached tree is reused for this n + 1-th expansion of A; to be clear, this expands A with a fully terminating tree fragment, meaning that none of the nodes descending from A in the tree being generated are subject to further expansion. The second case by-passes the cache and expands A according to the rules P A and rule probabilities \u03b8 A of the underlying SRCG G S . Other caches C B (B \u2208 M ) may come into play during those expansions of the descendants of A; thus a PYS-RCAG can define a hierarchical stochastic process. Both cases eventually result in a terminating treefragment for A, which is then added to the cache, updating the counts n, n zn+1 and potentially m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "The adaptation does not affect the string language of G S , but it maps the distribution over trees to one that is distributed according to the PYP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "The invariance of SRCGs trees under isomorphism would make the probabilistic model deficient, but we side-step this issue by requiring that grammar rules are specified in a canonical way that ensures a one-to-one correspondence between the order of nodes in a tree and of terminals in the yield.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PYSRCAG",
"sec_num": "4.2"
},
{
"text": "The inference procedure under our model is very similar to that of CFG PY-adaptor grammars, so we restate the central aspects here but refer the reader to the original article by Johnson et al. (2007) for further details. First, one may integrate out the adaptors to obtain a single distribution over the set of trees generated from a particular non-terminal. Thus, the joint probability of a particular sequence z for the adapted non-terminal A with cached counts (n 1 , . . . , n m ) is",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "Johnson et al. (2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P Y (z|a, b) = m k=1 (a(k \u2212 1) + b) n k \u22121 j=1 (j \u2212 a) n\u22121 i=0 (i + b) .",
"eq_num": "(6)"
}
],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "Taking all the adapted non-terminals into account, the joint probability of a set of full trees T under the grammar G is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (T |a, b, \u03b1) = A\u2208M B(\u03b1 A + f A ) B(\u03b1 A ) P Y (z(T )|a, b),",
"eq_num": "(7)"
}
],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "where f A is a vector of the usage counts of rules r \u2208 P A across T , and B is the Euler beta function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "The posterior distribution over a set of strings w is obtained by marginalising (7) over all trees that have w as their yields. This is intractable to compute directly, so instead we use MCMC techniques to obtain samples from that posterior using a component-wise Metropolis-Hastings sampler. The sampler works by visiting each string w in turn and drawing a new tree for it under a proposal grammar G Q and randomly accepting that as the new analysis for w according to the Metropolis-Hastings acceptreject probability. As proposal grammar, we use the analogous approximation of our G as Johnson et al. used for PCFGs, namely by taking a static snapshot G Q of the adaptor grammar where additional rules rewrite adapted non-terminals as the terminal strings of their cached trees. Drawing a sample from the proposal distribution is then a matter of drawing a random tree from the parse chart of w under G Q .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "Lastly, the adaptor hyperparameters a and b are modelled by placing flat Beta(1, 1) and vague Gamma(10, 0.1) priors on them, respectively, and inferring their values using slice sampling (Johnson and Goldwater, 2009) .",
"cite_spans": [
{
"start": 187,
"end": 216,
"text": "(Johnson and Goldwater, 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference under PYSRCAG",
"sec_num": "4.3"
},
{
"text": "We start with a CFG-based adaptor grammar 4 that models words as a stem and any number of prefixes and suffixes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Word \u2192 Pre * Stem Suf * (8) Pre | Stem | Suf \u2192 Char +",
"eq_num": "(9)"
}
],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "This fragment can be seen as building on the stemand-affix adaptor grammar presented in (Johnson et al., 2007) for morphological analysis of English, of which a later version also covers multiple affixes (Sirts and Goldwater, 2013) . In the particular case of Arabic, multiple affixes are required to handle the attachment of particles and proclitics onto base words.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Johnson et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 204,
"end": 231,
"text": "(Sirts and Goldwater, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "To extend this to complex stems consisting of a root with three radicals we have rules like the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Stem(abcdef g) \u2192 R3(b, d, e) T4(a, c, e, g) (10) Stem(abcdef ) \u2192 R3(a, c, e) T3(b, d, f ) (11) Stem(abcde) \u2192 R3(a, c, e) T2(b, d) (12) Stem(abcd) \u2192 R3(a, c, d) T1(b) (13) Stem(abc) \u2192 R3(a, b, c)",
"eq_num": "(14)"
}
],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "4 Adapted non-terminals are indicated by underlining and we use the following abbreviations: X \u2192 Y + means one or more instances of Y and encodes the rules X \u2192 Ys and Ys \u2192 Ys Y | Y. Similarly, X \u2192 Y * Z allows zero or more instances of Y and encodes the rules X \u2192 Z and X \u2192 Y + Z. Further relabelling is added as necessary to avoid cycles among adapted non-terminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "The actual rules include certain permutations of these, e.g. rule (13) has a variant R3(a, b, d)T1(c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "In unvocalised text, the standard written form of Modern Standard Arabic (MSA), it may happen that the stem and the root of a word form are one and the same. So while rule (14) may look trivial, it ensures that in such cases the radicals are still captured as descendants of the non-terminal category R3, thereby making their appearance in the cache.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "A discontiguous non-terminal An is rewritten through recursion on its arity down to 1, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "An(v 1 , . . . , v n ) \u2192 Al(v 1 , . . . , v n\u22121 ) Char(v n ) with base case A1(v) \u2192 Char(v),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
{
"text": "where Char rewrites all individual terminals as , v i are variables and l = n\u22121. 5 Note that although we provide the model with two sets of discontiguous non-terminals R and T, we do not specify their mapping onto the actual terminal strings; no subdivision of the alphabet into vowels and consonants is hard-wired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling root-templatic morphology",
"sec_num": "5"
},
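The arity recursion can be spelled out by generating the rule schema for each arity up to a maximum. A hypothetical sketch (the function name and string rule format are ours; the paper writes these rules in SRCG notation):

```python
def arity_rules(symbol, max_arity):
    """Generate the rewrite rules A_n(v1..vn) -> A_{n-1}(v1..v_{n-1}) Char(vn)
    for n = 2..max_arity, plus the base case A_1(v1) -> Char(v1)."""
    rules = ["%s1(v1) -> Char(v1)" % symbol]
    for n in range(2, max_arity + 1):
        head_args = ", ".join("v%d" % i for i in range(1, n + 1))
        tail_args = ", ".join("v%d" % i for i in range(1, n))
        rules.append("%s%d(%s) -> %s%d(%s) Char(v%d)"
                     % (symbol, n, head_args, symbol, n - 1, tail_args, n))
    return rules
```

Baking the arity into the symbol name (R1, R2, R3, ...) is exactly what guarantees acyclicity, since each rule strictly decreases the arity.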
{
"text": "We evaluate our model on standard Arabic, Quranic Arabic and Hebrew in terms of segmentation quality and lexicon induction ability. These languages share various properties, including morphology and lexical cognates, but are sufficiently different so as to require manual intervention when transferring rulebased morphological analysers across languages. A key question in this evaluation is therefore whether an appropriate instantiation of our model successfully generalises across related languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Our models are unsupervised and therefore learn from raw text, but their evaluation requires annotated data as a gold-standard and these were derived 6 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "Arabic (MSA) We created the dataset BW by synthesising 50k morphotactically correct word types from the morpheme lexicons and consistency rules supplied with the Buckwalter Arabic Morphological 5 Including the arity as part of the non-terminal symbol names forms part of our convention here to ensure that the grammar contains no cycles, a situation which would complicate inference under PYSRCAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "6 Our data preprocessing scripts are obtainable from http://github.com/bothameister/pysrcag-data. Analyser (BAMA). 7 This allowed control over the word shapes, which is important to focus the evaluation, while yielding reliable segmentation and root annotations. BW has no vocalisation; we denote the corresponding vocalised dataset as BW .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "Quranic Arabic We extracted the roughly 18k word types from a morphologically analysed version of the Quran (Dukes and Habash, 2010) . As an additional challenge, we left all given diacritics intact for this dataset, QU .",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Dukes and Habash, 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "Hebrew We leveraged the Hebrew CHILDES database as an annotated resource (Albert et al., 2013) and were able to extract 5k word types that feature at least one affix to use as dataset HEB. The corrected versions of words marked as non-standard child language were used, diacritics were dropped, and we conflated stressed and unstressed vowels to overcome inconsistencies in the source data.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Albert et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "We consider two classes of models. The first is the strictly context-free adaptor grammar for morphemes as sequences of characters using rules (8)-(9), which we denote as Concat and MConcat, where the latter allows multiple prefixes/suffixes in a word. These serve as baselines for the second class in which non-concatenative rules are added. MTpl and Tpl denote the canonical ver-sions with stems as shown in the set of rules above, and we experiment with a variant Tpl3Ch that allows the non-terminal T1 to be rewritten as up to three Char symbols, since the data indicate there are cases where multiple characters intervene between the radicals of a root. These models exclude rule (10), which we include only in the variant Tpl+T4. Lastly, TplR4 is the extension of Tpl+T4 to include a stem-forming rule that uses R4. As external baseline model we used Morfessor (Creutz and Lagus, 2007) , which performs decently in morphological segmentation of a variety of languages, but only handles concatenation.",
"cite_spans": [
{
"start": 867,
"end": 891,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "6.2"
},
{
"text": "The MCMC samplers converged within a few hundred iterations and we collected 100 posterior samples after 900 iterations of burn-in. Collected samples, each of which is a set of parse trees of the input word types, are used in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "6.3"
},
{
"text": "First, by averaging over the samples we can estimate the joint probability of a word type w and a parse tree t under the adaptor grammar, conditional on the data and the model's hyperparameters. We take the most probable parse of each word type and evaluate the implied segmentation against the gold standard segmentation. Likewise, we evaluate the implied lexicon of stems, affixes and roots against the corresponding reference sets. It should be emphasised that using this maximally probable analysis is aimed at simplifying the evaluation set-up; one could also extract multiple analyses of a word since the model defines a distribution over them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "6.3"
},
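The first use of the samples can be sketched as a simple relative-frequency estimate over the collected parses. A minimal Python illustration (the representation of samples as per-word parse dicts is our assumption, not the paper's implementation):

```python
from collections import Counter, defaultdict

def most_probable_parses(samples):
    """Pick each word type's most probable analysis by its relative
    frequency across posterior samples. `samples` is a list of
    {word: parse} dicts, one dict per collected sample; parses must
    be hashable (e.g. strings or tuples)."""
    counts = defaultdict(Counter)
    for sample in samples:
        for word, parse in sample.items():
            counts[word][parse] += 1
    # most_common(1) gives the modal parse under the empirical posterior
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}
```

The modal parse then yields the segmentation and root hypothesis that are scored against the gold standard.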
{
"text": "The second method abstracts away from individual word-types and instead averages over the union of all samples to obtain an estimate of the probability of a string s being generated by a certain category (non-terminal) of the grammar. In this way we can obtain a lexicon of the morphemes in each category, ranked by their probability under the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "6.3"
},
{
"text": "The quality of each induced lexicon is measured with standard set-based precision and recall with respect to the corresponding gold lexicon. The results are summarised by balanced F-scores in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inducing Morpheme Lexicons",
"sec_num": "6.4"
},
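The set-based evaluation reduces to intersecting the induced lexicon with the gold one; a small Python sketch of the computation (the function name is ours):

```python
def lexicon_f1(induced, gold):
    """Set-based precision, recall and balanced F-score of an
    induced morpheme lexicon against a gold lexicon."""
    induced, gold = set(induced), set(gold)
    tp = len(induced & gold)                      # correctly induced morphemes
    prec = tp / len(induced) if induced else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```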
{
"text": "The main result is that all our models capable of forming complex stems obtain a marked improvement in F-scores over the baseline concatenative adaptor grammar, and the margin of improvement grows along with the expressivity of the complexstem models tested. This applies across prefix, stem and suffix categories and across our datasets, with the exception of QU , which we elaborate on in \u00a76.5. Stem lexicons of Arabic were learnt with relatively constant precision (\u223c70%), but modelling complex stems broadened the coverage by about 3000 stems over the concatenative model (against a reference set of 24k stems). On vocalised Arabic, the improvements for stems are along both dimensions. In contrast, affix lexicons for both BW and BW are noisy and the models all generate greedily to obtain near perfect recall but low precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Morpheme Lexicons",
"sec_num": "6.4"
},
{
"text": "On our Hebrew data, which comprises only 5k words, the gains in lexicon quality from modelling complex stems tend to be larger than on Arabic. This is consistent with our intuition that an appropriate, richer Bayesian prior helps overcome data sparsity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Morpheme Lexicons",
"sec_num": "6.4"
},
{
"text": "Extracting a lexicon of roots is rendered challenging by the unsupervised nature of the model as the labelling of grammar symbols is ultimately arbitrary. Our simple approach was to regard a character tuple parsed under category R3 as a root. This had mixed success, as demonstrated by the outlier scores in Table 2. In the one case where it was obvious that T3 had been been co-opted for the role, we report the F-score obtained on the union of R3 and T3 strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Morpheme Lexicons",
"sec_num": "6.4"
},
{
"text": "The preceding set-based evaluation imposes hard decisions about category membership. But adaptor grammars are probabilistic by definition and should thus also be evaluated in terms of probabilistic ability. One method is to turn the model predictions into a binary classifier of strings using Receiver-Operator-Characteristic (ROC) theory. We plot the true positive rate versus the false positive rate for each prediction lexicon L \u03c4 containing strings that have probability greater than \u03c4 under the model (for a grammar category of interest). A perfect classifier would rank all true positives (e.g. stem strings) above false positives (e.g. non-stem strings), corresponding to a curve in the upper left corner of the ROC plot. A random guesser would trace a diagonal line. The area under the curves (AUC) is the probability that the classifier would discriminate correctly. Our models with complex stem formation improve over the baseline on the AUC metric too. We include the ROC plots for Hebrew stem and root induction in Figure 2 , along with the roots the model was most confident about (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 1027,
"end": 1035,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 1094,
"end": 1103,
"text": "(Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Soft decisions",
"sec_num": null
},
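Sweeping the threshold \u03c4 over the model probabilities traces the ROC curve directly from the ranked lexicon. A self-contained Python sketch of this procedure (function name ours; AUC via the trapezoidal rule):

```python
def roc_auc(scored, gold):
    """ROC points and AUC for a ranked lexicon. `scored` maps strings
    to model probabilities; `gold` is the set of true morphemes.
    Lowering the threshold admits one string at a time, yielding a
    (false positive rate, true positive rate) point per step."""
    ranked = sorted(scored, key=scored.get, reverse=True)
    pos = sum(1 for s in ranked if s in gold)
    neg = len(ranked) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for s in ranked:
        if s in gold:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg if neg else 0.0, tp / pos if pos else 0.0))
    # trapezoidal rule over consecutive ROC points
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc
```

A perfect ranking (all gold strings above all others) gives AUC 1.0; random ranking approaches 0.5.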
{
"text": "In this section we turn to the analyses our models assign to each word type. Two aspects of interest are the segmentation into sequential morphemes and the identification of the root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "Our intercalating adaptor grammars consistently obtain large gains in segmentation accuracy over the baseline concatenative model, across all our datasets (Table 3) . We measure segmentation quality as segment border F1-score (SBF) (Sirts and Goldwater, 2013) , which is the F-score over word-internal segmentation points of the predicted analysis with respect to the gold segmentation.",
"cite_spans": [
{
"start": 232,
"end": 259,
"text": "(Sirts and Goldwater, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 155,
"end": 164,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
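The SBF metric compares the sets of word-internal cut positions implied by the predicted and gold segmentations. A minimal Python sketch (function names ours; micro-averaged over word types):

```python
def borders(segments):
    """Word-internal boundary positions implied by a segmentation,
    e.g. ['wa', 'katab', 'tum'] -> {2, 7} (no boundary at word end)."""
    cuts, pos = set(), 0
    for seg in segments[:-1]:
        pos += len(seg)
        cuts.add(pos)
    return cuts

def segment_border_f1(predicted, gold):
    """Segment border F1 (SBF) over parallel lists of segmentations:
    F-score on word-internal segmentation points."""
    tp = fp = fn = 0
    for p, g in zip(predicted, gold):
        pb, gb = borders(p), borders(g)
        tp += len(pb & gb)
        fp += len(pb - gb)
        fn += len(gb - pb)
    prec = tp / (tp + fp) if tp + fp else 1.0
    rec = tp / (tp + fn) if tp + fn else 1.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Note that a monomorphemic word analysed as monomorphemic contributes no boundaries at all, which is why under-segmentation hurts recall but not precision here.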
{
"text": "Of the two MSA datasets, the vocalised version BW presents a more difficult segmentation task as its words are on average longer and feature 31k unique contiguous morphemes, compared to the 24k in BW for the same number of words. It should thus benefit more from additional model expressivity, as is reflected in the increase of 10 SBF when adding the TplR4 rule to the other triliteral ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "The best triliteral root identification accuracy (on a per-word basis) was found for HEB (74%) and BW (67%). 8, 9 Refer to Figure 3 for example analyses.",
"cite_spans": [
{
"start": 109,
"end": 111,
"text": "8,",
"ref_id": null
},
{
"start": 112,
"end": 113,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "An interesting aspect of these results is that templatic rules may aid segmentation quality without necessarily giving perfect root identification. Modelling stem substructure allows any regularities that give rise to a higher data likelihood to be picked up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "The low performance on the Quran demands further explanation. All our adaptor grammars severely oversegmented this data, although the mistakes were not uniformly distributed. Most of the performance loss is on the 79% of words that have 1-2 morphemes. On the remaining words (having 3-5 morphemes), our models recover and approach the Morfessor baseline (MConcat: 32.7 , MTpl3Ch: 38.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "Preliminary experiments on BW had indicated that adaptation of (single) affix categories is crucial for good performance. Our multi-affixing models used on QU lacked a further level of adaptation for composite affixes, which we suspect as a contributing factor to the lower performance on that dataset. This remains to be confirmed in future experiments, but would be consistent with other observations on the role of hierarchical adaptation in adaptor grammars (Sirts and Goldwater, 2013) . The trend that intercalated rules improve segmentation (compared to the concatenative grammar) remains consistent across datasets, despite the lower absolute performance on QU .",
"cite_spans": [
{
"start": 462,
"end": 489,
"text": "(Sirts and Goldwater, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "The performance of the Morfessor baseline was quite mixed. Contrary to our expectations, it performs best on the \"harder\" BW , worst on the arguably simpler HEB and struggled less than the adaptor grammars on QU .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "One factor here is that it learns according to a grammar with multiple consecutive affixes and stems, whereas all our experiments (except on QU ) presupposed single affixes. This biases the evaluation slightly in our favour, but works in Morfessor's favour on the QU data which is annotated with multiple affixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis per Word Type",
"sec_num": "6.5"
},
{
"text": "The distinctive feature of our morphological model is that it jointly addresses root identification and morpheme segmentation, and our results demonstrate the mutual benefit of this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "In contrast, earlier unsupervised approaches tend to focus on these tasks in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "In unsupervised Arabic segmentation, the parametric Bayesian model of (Lee et al., 2011) achieves F1-scores in the high eighties by incorporating sentential context and inferred syntactic categories, both of which our model forgoes, although theirs has no account of discontiguous root morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "Example instances Numbers indicate position when ranked by model probability. (G)ood and (B)ad instances from the corpus are given with morpheme boundaries marked: true positive (.), false negative ( ) and false positive ( x ). Hypothesised root characters are boldfaced, while accent (\u02c7) marks gold root characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "Previous approaches to Arabic root identification that sought to use little supervision typically constrain the search space of candidate characters within a word, leveraging pre-existing dictionaries (Darwish, 2002; Boudlal et al., 2009) or rule constraints (Elghamry, 2005; Rodrigues and\u0106avar, 2007; Daya et al., 2008) .",
"cite_spans": [
{
"start": 201,
"end": 216,
"text": "(Darwish, 2002;",
"ref_id": "BIBREF9"
},
{
"start": 217,
"end": 238,
"text": "Boudlal et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 259,
"end": 275,
"text": "(Elghamry, 2005;",
"ref_id": "BIBREF14"
},
{
"start": 276,
"end": 301,
"text": "Rodrigues and\u0106avar, 2007;",
"ref_id": null
},
{
"start": 302,
"end": 320,
"text": "Daya et al., 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "In contrast to these approaches, our model requires no dictionary, and while our grammar rules effect some constraints on what could be a root, they are specified in a convenient and flexible manner that Figure 3 : Parse trees produced for words in the two standard Arabic datasets that were incorrectly segmented by the baseline grammar. The templatic grammars correctly identified the triliteral and quadriliteral roots, also fixing the segmentation of (a). In (b), the templatic grammar improved over the baseline by finding the correct prefix but falsely posited a suffix. Unimportant subtrees are elided for space, while the yields of discontiguous constituents are indicated next to their symbols, with dots marking gaps. Crossing branches are not drawn but should be inferrable. Root characters are bold-faced in the reference analysis . The nonterminal X2 in (a) is part of a number of implementation-specific helper rules that ensure the appropriate handling of partly contiguous roots. makes experimentation with other phenomena easy.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "X2 s t \u2022 r R3 s \u2022 t \u2022 r R1 r R2 s \u2022 t R1 t R1 s T2 > \u2022 A T1 A T1 > (a) Concat & Tpl+T4, \"wl>stArkm\" (BW) Word Stem n o t i l l A Pre l i d a li danotillA Pre(l i) ... Stem(danotill) ... Suf(A) T4 o \u2022 a \u2022 i \u2022 l T1 l T3 o \u2022 a \u2022 i T1 i T2 o \u2022 a T1 a T1 o R4 d \u2022 n \u2022 t \u2022 l R1 l R3 d \u2022 n \u2022 t R1 t R2 d \u2022 n R1 n R1 d (b) Concat & TplR4, \"lidanotillA\" (BW )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "Recent work by Fullwood and O'Donnell (2013) goes some way toward jointly dealing with nonconcatenative and concatenative morphology in the unsupervised setting, but their focus is limited to inflected stems and does not handle multiple consecutive affixes. They analyse the Arabic verb stem (e.g. kataba \"he wrote\") into a templatic bit-string denoting root and non-root characters (e.g. r-r-r-) along with a root morpheme (e.g. ktb) and a so-called residue morpheme (e.g. aaa). Their nonparametric Bayesian model induces lexicons of these entities and achieves very high performance on templates. The explicit formulation of templates alleviates the labelling ambiguity that hampered our evaluation ( \u00a76.4), but we believe their method of analysis can be simulated in our framework using the appropriate SRCG-rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "Learning root-templatic morphology is loosely related to morphological paradigm induction (Clark, 2001; Dreyer and Eisner, 2011; Durrett and DeNero, 2013) . Our models do not represent templatic paradigms explicitly, but it is interesting to note that preliminary experiments with German indicate that our adaptor grammars pick up on the past participle forming circumfix in ab+ge+spiel+t (played back).",
"cite_spans": [
{
"start": 90,
"end": 103,
"text": "(Clark, 2001;",
"ref_id": "BIBREF6"
},
{
"start": 104,
"end": 128,
"text": "Dreyer and Eisner, 2011;",
"ref_id": "BIBREF11"
},
{
"start": 129,
"end": 154,
"text": "Durrett and DeNero, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Root",
"sec_num": null
},
{
"text": "We presented a new approach to modelling nonconcatenative phenomena in morphology using sim-ple range concatenating grammars and extended adaptor grammars to this formalism. Our experiments show that this richer model improves morphological segmentation and morpheme lexicon induction on different languages in the Semitic family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "8"
},
{
"text": "Various avenues for future work present themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "8"
},
{
"text": "Firstly, the lightly-supervised, metagrammar approach to adaptor grammars (Sirts and Goldwater, 2013) can be extended to this more powerful formalism to lessen the burden of defining the \"right\" grammar rules by hand, and possibly boost performance. Secondly, the discontiguous constituents learnt with our framework can be used as features in other downstream applications. Especially in low-resource languages, the ability to model non-concatenative phenomena (e.g. circumfixing, ablaut, etc.) can play an important role in reducing data sparsity for tasks like word alignment and language modelling. Finally, the PYSRCAG presents another way of learning SRCGs in general, which can thus be employed in other applications of SRCGs, including syntactic parsing and translation.",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "(Sirts and Goldwater, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "8"
},
{
"text": "Our formulation is in terms of SRCGs, which are equivalent in power to linear context-free rewrite systems(Vijay- Shanker et al., 1987) and multiple context-free grammars(Seki et al., 1991), all of which are weaker than (non-simple) range concatenating grammars(Boullier, 2000).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The trade-off between arity and rank with respect to parsing complexity has been characterised(Gildea, 2010), and the appropriate refactoring may bring down the complexity for our grammars too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used version 2.0, LDC2004L02, and sampled word types having a single stem and at most one prefix, suffix or both, according to the following random procedure: Sample a shape (stem: 0.1, pre+stem: 0.25 stem+suf: 0.25, pre+stem+suf: 0.4). Sample uniformly at random (with replacement) a stem from the BAMA stem lexicon, and affix(es) from the ones consistent with the chosen stem. The BAMA lexicons contain affixes and their legitimate concatenations, so some of the generated words would permit a linguistic segmentation into multiple prefixes/suffixes. Nonetheless, we take as gold-standard segmentation precisely the items used by our procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
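The sampling procedure in this footnote can be sketched in a few lines of Python. This is a simplified illustration under our own assumptions: the affix lexicons are plain lists and BAMA's per-stem compatibility filtering is elided.

```python
import random

def sample_word(stems, prefixes, suffixes, rng=random):
    """Synthesise one word type: sample a shape with the stated
    probabilities, then a stem, then the affixes the shape calls for.
    Returns the list of morphemes (the gold segmentation); join them
    for the surface form."""
    shape = rng.choices(
        ["stem", "pre+stem", "stem+suf", "pre+stem+suf"],
        weights=[0.1, 0.25, 0.25, 0.4])[0]
    parts = []
    if shape.startswith("pre+"):
        parts.append(rng.choice(prefixes))
    parts.append(rng.choice(stems))
    if shape.endswith("+suf"):
        parts.append(rng.choice(suffixes))
    return parts
```

Because the shape is sampled first, the generated corpus has a controlled distribution over word shapes, which is what makes the resulting gold segmentations reliable.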
{
"text": "When excluding cases where root equals stem, root identification on BW is 55%. Those cases are still not trivial, since words without roots also exist.9 By way of comparison, Rodrigues and\u0106avar (2007) presented an unsupervised statistics-based root identification method that obtained precision ranging between 50-75%, the higher requiring vocalised words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their valuable comments. Our PYSRCAG implementation leveraged the adaptor grammar code released by Mark Johnson, whom we thank, along with the individuals who contributed to the public data sources that enabled the empirical elements of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Hebrew CHILDES corpus: transcription and morphological analysis. Language Resources and Evaluation",
"authors": [
{
"first": "Aviad",
"middle": [],
"last": "Albert",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Macwhinney",
"suffix": ""
},
{
"first": "Bracha",
"middle": [],
"last": "Nir",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aviad Albert, Brian MacWhinney, Bracha Nir, and Shuly Wintner. 2013. The Hebrew CHILDES corpus: tran- scription and morphological analysis. Language Re- sources and Evaluation, pages 1-33.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Morphological Analysis and Generation of Arabic Nouns: A Morphemic Functional Approach",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Altantawy",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Ibrahim",
"middle": [],
"last": "Saleh",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "851--858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Altantawy, Nizar Habash, Owen Rambow, and Ibrahim Saleh. 2010. Morphological Analysis and Generation of Arabic Nouns: A Morphemic Func- tional Approach. In Proceedings of LREC, pages 851- 858.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Finite state morphology",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Lauri",
"middle": [],
"last": "Beesley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth R Beesley and Lauri Karttunen. 2003. Fi- nite state morphology, volume 18. CSLI publications Stanford.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Markovian approach for Arabic Root Extraction",
"authors": [
{
"first": "Abderrahim",
"middle": [],
"last": "Boudlal",
"suffix": ""
},
{
"first": "Rachid",
"middle": [],
"last": "Belahbib",
"suffix": ""
},
{
"first": "Abdelhak",
"middle": [],
"last": "Lakhouaja",
"suffix": ""
},
{
"first": "Azzeddine",
"middle": [],
"last": "Mazroui",
"suffix": ""
}
],
"year": 2009,
"venue": "The International Arab Journal of Information Technology",
"volume": "8",
"issue": "1",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abderrahim Boudlal, Rachid Belahbib, Abdelhak Lakhouaja, Azzeddine Mazroui, Abdelouafi Meziane, and Mohamed Bebah. 2009. A Markovian approach for Arabic Root Extraction. The International Arab Journal of Information Technology, 8(1):91-98.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A cubic time extension of contextfree grammars",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
}
],
"year": 2000,
"venue": "Grammars",
"volume": "3",
"issue": "2-3",
"pages": "111--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Boullier. 2000. A cubic time extension of context- free grammars. Grammars, 3(2-3):111-131.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Arabic Morphological Analyzer",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
}
],
"year": 2002,
"venue": "Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Buckwalter. 2002. Arabic Morphological Ana- lyzer. Technical report, Linguistic Data Consortium, Philedelphia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning Morphology with Pair Hidden Markov Models",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACL Student Workshop",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark. 2001. Learning Morphology with Pair Hidden Markov Models. In Proceedings of the ACL Student Workshop, pages 55-60.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finitestate registered automata for non-concatenative morphology",
"authors": [
{
"first": "Yael",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Sygal",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "49--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yael Cohen-Sygal and Shuly Wintner. 2006. Finite- state registered automata for non-concatenative mor- phology. Computational Linguistics, 32(1):49-82.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "4",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1):1-34.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a shallow Arabic morphological analyzer in one day",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish. 2002. Building a shallow Arabic morphological analyzer in one day. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, pages 47-54. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying Semitic Roots: Machine Learning with Linguistic Constraints",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Daya",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "3",
"pages": "429--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Daya, Dan Roth, and Shuly Wintner. 2008. Identifying Semitic Roots: Machine Learning with Linguistic Constraints. Computational Linguistics, 34(3):429-448.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discovering Morphological Paradigms from Plain Text Using a Dirichlet Process Mixture Model",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "616--627",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Jason Eisner. 2011. Discovering Morphological Paradigms from Plain Text Using a Dirichlet Process Mixture Model. In Proceedings of EMNLP, pages 616-627, Edinburgh, Scotland.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Morphological Annotation of Quranic Arabic",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kais Dukes and Nizar Habash. 2010. Morphological An- notation of Quranic Arabic. In Proceedings of LREC.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Supervised Learning of Complete Morphological Paradigms",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
}
],
"year": 2013,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1185--1195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and John DeNero. 2013. Supervised Learn- ing of Complete Morphological Paradigms. In Pro- ceedings of NAACL-HLT, pages 1185-1195, Atlanta, Georgia, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Constraint-based Algorithm for the Identification of Arabic Roots",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Elghamry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Midwest Computational Linguistics Colloquium. Indiana University",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khaled Elghamry. 2005. A Constraint-based Algorithm for the Identification of Arabic Roots. In Proceed- ings of the Midwest Computational Linguistics Collo- quium. Indiana University. Bloomington, IN.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating Hebrew verb morphology by default inheritance hierarchies",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Stump",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Finkel and Gregory Stump. 2002. Generating Hebrew verb morphology by default inheritance hier- archies. In Proceedings of the ACL Workshop on Com- putational Approaches to Semitic Languages. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning non-concatenative morphology",
"authors": [
{
"first": "Michelle",
"middle": [
"A"
],
"last": "Fullwood",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"J"
],
"last": "O'donnell",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michelle A. Fullwood and Timothy J. O'Donnell. 2013. Learning non-concatenative morphology. In Proceed- ings of the Workshop on Cognitive Modeling and Com- putational Linguistics, pages 21-27, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semitic morphological analysis and generation using finite state transducers with feature structures",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Gasser",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "309--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Gasser. 2009. Semitic morphological analysis and generation using finite state transducers with fea- ture structures. In Proceedings of EACL, pages 309- 317. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Optimal Parsing Strategies for Linear Context-Free Rewriting Systems",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2010,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "769--776",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2010. Optimal Parsing Strategies for Lin- ear Context-Free Rewriting Systems. In Proceedings of NAACL, pages 769-776. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interpolating Between Types and Tokens by Estimating Power-Law Generators",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Neural Information Processing Systems",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2006. Interpolating Between Types and Tokens by Estimating Power-Law Generators. In Advances in Neural Information Processing Systems, Volume 18.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised Learning of Morphology",
"authors": [
{
"first": "Harald",
"middle": [],
"last": "Hammarstr\u00f6m",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "309--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harald Hammarstr\u00f6m and Lars Borin. 2011. Unsuper- vised Learning of Morphology. Computational Lin- guistics, 37(2):309-350.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Nonparametric Bayesian Machine Transliteration with Synchronous Adaptor Grammars",
"authors": [
{
"first": "Yun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "534--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun Huang, Min Zhang, and Chew Lim Tan. 2011. Nonparametric Bayesian Machine Transliteration with Synchronous Adaptor Grammars. In Proceedings of ACL (Short papers), pages 534-539.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving nonparameteric Bayesian inference: Experiments on unsupervised word segmentation with adaptor grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "317--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: Experiments on unsupervised word segmentation with adaptor gram- mars. In Proceedings of NAACL-HLT, pages 317-325. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2007. Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models. In Advances in Neural Information Process- ing Systems, volume 19, page 641. MIT.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised word segmentation for Sesotho using Adaptor Grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL Special Interest Group on Computational Morphology and Phonology (SigMorPhon)",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2008. Unsupervised word segmentation for Sesotho using Adaptor Grammars. In Proceedings of ACL Special Interest Group on Computational Mor- phology and Phonology (SigMorPhon), pages 20-27. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions?",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1985,
"venue": "Natural Language Parsing",
"volume": "",
"issue": "",
"pages": "206--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind K. Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is required to provide reason- able structural descriptions? In D.R. Dowty, L. Kart- tunen, and A.M. Zwicky, editors, Natural Language Parsing, chapter 6, pages 206-250. Cambridge Uni- versity Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Synchronous Linear Context-Free Rewriting Systems for Machine Translation",
"authors": [
{
"first": "Miriam",
"middle": [],
"last": "Kaeshammer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miriam Kaeshammer. 2013. Synchronous Linear Context-Free Rewriting Systems for Machine Trans- lation. In Proceedings of the Workshop on Syntax, Se- mantics and Structure in Statistical Translation, pages 68-77, Atlanta, Georgia. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stochastic Multiple Context-Free Grammar for RNA Pseudoknot Modeling",
"authors": [
{
"first": "Yuki",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Workshop on Tree Adjoining Grammar and Related Formalisms",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuki Kato, Hiroyuki Seki, and Tadao Kasami. 2006. Stochastic Multiple Context-Free Grammar for RNA Pseudoknot Modeling. In Proceedings of the Inter- national Workshop on Tree Adjoining Grammar and Related Formalisms, pages 57-64.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multitiered Nonlinear Morphology Using Multitape Finite Automata: A Case Study on Syriac and Arabic",
"authors": [
{
"first": "Kiraz",
"middle": [],
"last": "George Anton",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "1",
"pages": "77--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Anton Kiraz. 2000. Multitiered Nonlinear Mor- phology Using Multitape Finite Automata: A Case Study on Syriac and Arabic. Computational Linguis- tics, 26(1):77-105, March.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A general computational model for word-form recognition and production",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Koskenniemi",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the 10th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "178--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Koskenniemi. 1984. A general computational model for word-form recognition and production. In Proceedings of the 10th international conference on Computational Linguistics, pages 178-181. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Overview and Results of Morpho Challenge",
"authors": [
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Ville",
"middle": [
"T"
],
"last": "Turunen",
"suffix": ""
},
{
"first": "Graeme",
"middle": [
"W"
],
"last": "Blackwood",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2009,
"venue": "Multilingual Information Access Evaluation I. Text Retrieval Experiments",
"volume": "6241",
"issue": "",
"pages": "578--597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikko Kurimo, Sami Virpioja, Ville T. Turunen, Graeme W. Blackwood, and William Byrne. 2010. Overview and Results of Morpho Challenge 2009. In Multilingual Information Access Evaluation I. Text Re- trieval Experiments, volume 6241 of Lecture Notes in Computer Science, pages 578-597. Springer Berlin / Heidelberg.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Modeling syntactic context improves morphological segmentation",
"authors": [
{
"first": "Yoong Keok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2011. Modeling syntactic context improves morpho- logical segmentation. In Proceedings of CoNLL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Direct Parsing of Discontinuous Constituents in German",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL-HLT Workshop on Statistical Parsing of Morphologically-Rich Languages",
"volume": "",
"issue": "",
"pages": "58--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Maier. 2010. Direct Parsing of Discon- tinuous Constituents in German. In Proceedings of the NAACL-HLT Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 58-66. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The Two-Parameter Poisson-Dirichlet Distribution Derived from a Stable Subordinator",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Pitman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Yor",
"suffix": ""
}
],
"year": 1997,
"venue": "The Annals of Probability",
"volume": "25",
"issue": "2",
"pages": "855--900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jim Pitman and Marc Yor. 1997. The Two-Parameter Poisson-Dirichlet Distribution Derived from a Stable Subordinator. The Annals of Probability, 25(2):855- 900.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Exchangeable and partially exchangeable random partitions",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Pitman",
"suffix": ""
}
],
"year": 1995,
"venue": "Probability Theory and Related Fields",
"volume": "102",
"issue": "",
"pages": "145--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jim Pitman. 1995. Exchangeable and partially exchange- able random partitions. Probability Theory and Re- lated Fields, 102:145-158.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "External Evidence and the Semitic Root",
"authors": [
{
"first": "Jean-Fran\u00e7ois",
"middle": [],
"last": "Prunet",
"suffix": ""
}
],
"year": 2006,
"venue": "Morphology",
"volume": "16",
"issue": "1",
"pages": "41--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Fran\u00e7ois Prunet. 2006. External Evidence and the Semitic Root. Morphology, 16(1):41-67.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning Arabic Morphology Using Statistical Constraint-Satisfaction Models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Damir\u0107avar",
"suffix": ""
}
],
"year": 2007,
"venue": "Perspectives on Arabic Linguistics: Proceedings of the 19th Arabic Linguistics Symposium",
"volume": "",
"issue": "",
"pages": "63--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Rodrigues and Damir\u0106avar. 2007. Learning Arabic Morphology Using Statistical Constraint-Satisfaction Models. In Elabbas Benmamoun, editor, Perspectives on Arabic Linguistics: Proceedings of the 19th Ara- bic Linguistics Symposium, pages 63-75, Urbana, IL, USA. John Benjamins Publishing Company.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Computational Cognitive Morphosemantics: Modeling Morphological Compositionality in Hebrew Verbs with Embodied Construction Grammar",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Schneider. 2010. Computational Cognitive Morphosemantics: Modeling Morphological Compo- sitionality in Hebrew Verbs with Embodied Construc- tion Grammar. In Proceedings of the Annual Meeting of the Berkeley Linguistics Society, Berkeley, CA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "On multiple context-free grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1991,
"venue": "Theoretical Computer Science",
"volume": "88",
"issue": "2",
"pages": "191--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context-free gram- mars. Theoretical Computer Science, 88(2):191-229.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Minimally-Supervised Morphological Segmentation using Adaptor Grammars",
"authors": [
{
"first": "Kairit",
"middle": [],
"last": "Sirts",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kairit Sirts and Sharon Goldwater. 2013. Minimally- Supervised Morphological Segmentation using Adap- tor Grammars. Transactions of the ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Characterizing structural descriptions produced by various grammatical formalisms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Weir",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Vijay-Shanker, David J. Weir, and Aravind K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of ACL, pages 104-111.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Example derivation for wakitabi (and my book) using the SRCG fragment from \u00a73.2. CFGs cannot capture such crossing branches.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "as adaptor function. A Pitman-Yor Simple Range Concatenating Adaptor Grammar (PYSRCAG) is a tuple G = (G S , M, a, b, \u03b1), where G S is a probabilistic SRCG as defined before and M \u2286 N is a set of adapted non-terminals. The vectors a and b, indexed by the elements of M , are the discount and concentration parameters for each adapted nonterminal, with a \u2208 [0, 1], b \u2265 0. \u03b1 are parameters to Dirichlet priors on the rule probabilities \u03b8.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "ROC curves for predicting the stem and root lexicons for the HEB dataset. The area under each curve (AUC), as computed with the trapezium rule, is given in parentheses.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "l) ... Stem ... Suf(k m)",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"3\">Vocalised Arabic (BW )</td><td colspan=\"2\">Unvocalised Arabic (BW)</td><td>Hebrew (HEB)</td></tr><tr><td/><td colspan=\"2\">Pre Stem Suf</td><td>R3</td><td>Pre Stem Suf</td><td>R3</td><td>Pre Stem Suf</td><td>R3</td></tr><tr><td colspan=\"3\">Concat 15.0 20.2 25.4</td><td colspan=\"2\">-32.8 44.1 40.3</td><td colspan=\"2\">-18.7 20.9 29.2</td><td>-</td></tr><tr><td>Tpl</td><td colspan=\"4\">24.7 39.4 35.2 \u2020 42.4 45.9 54.7 47.9</td><td colspan=\"2\">62.7 35.1 59.6 52.9 34.8</td></tr><tr><td colspan=\"3\">Tpl3Ch 28.4 36.0 36.5</td><td colspan=\"2\">5.2 50.3 55.1 48.5</td><td colspan=\"2\">62.4 38.6 61.5 56.6</td><td>7.1</td></tr><tr><td colspan=\"3\">Tpl+T4 29.0 44.8 41.0</td><td colspan=\"2\">3.9 46.2 54.2 47.7</td><td colspan=\"2\">62.3 32.5 59.6 53.0 36.4</td></tr><tr><td>TplR4</td><td colspan=\"2\">37.8 60.3 47.0</td><td colspan=\"2\">5.2 53.0 57.7 51.9</td><td colspan=\"2\">62.4 38.0 62.4 55.2 34.7</td></tr><tr><td colspan=\"2\">Table 2: BW</td><td>BW</td><td>QU</td><td>HEB</td><td/></tr><tr><td colspan=\"5\">Morfessor 55.57 40.04 44.34 24.20</td><td/></tr><tr><td>Concat</td><td colspan=\"4\">47.36 64.22 19.64 60.05</td><td/></tr><tr><td>Tpl</td><td colspan=\"4\">60.42 71.91 22.53 77.26</td><td/></tr><tr><td>Tpl3Ch</td><td colspan=\"4\">60.52 72.20 25.72 77.41</td><td/></tr><tr><td>Tpl+T4</td><td colspan=\"4\">64.49 71.59 24.81 77.14</td><td/></tr><tr><td>TplR4</td><td colspan=\"2\">74.54 73.66</td><td>-</td><td>78.14</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Morpheme lexicon induction quality. F1-scores for lexicons induced from the most probable parse of each different dataset under each models. \u2020 42.4 was obtained by taking the union of R3 and T3 items to match the way the model used them (see \u00a76.4).",
"html": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Segmentation quality in SBF1. The QU results are for the corresponding M* models .",
"html": null
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "1. spr G\u0161apa\u0159.ti te.\u0161ap\u0159 ye.\u0161ap\u0159.u B sipur.im hi x\u0161 tap x a\u0159 t 2. lbs G\u013eaba\u0160.t li.\u013ebo\u0160 ti.\u013ebe\u0160.i B le ha x\u013eb i\u0160 ti t x\u013e ab\u0160.i 3. ptx Gpa\u0165ax.ti ti.p\u0165ex.i B li.p\u0165oax ni xp\u0165 ax.at 5. !al \u00d7 B ya.!al.uma x! a\u013e.a!a\u010d\u013e x an it",
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Top Hebrew roots hypothesised by Tpl+T4.",
"html": null
}
}
}
}