|
{ |
|
"paper_id": "D07-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:18:47.455192Z" |
|
}, |
|
"title": "Joint Morphological and Syntactic Disambiguation *", |
|
"authors": [ |
|
{ |
|
"first": "Shay", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cohen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In morphologically rich languages, should morphological and syntactic disambiguation be treated sequentially or as a single problem? We describe several efficient, probabilistically interpretable ways to apply joint inference to morphological and syntactic disambiguation using lattice parsing. Joint inference is shown to compare favorably to pipeline parsing methods across a variety of component models. State-of-the-art performance on Hebrew Treebank parsing is demonstrated using the new method. The benefits of joint inference are modest with the current component models, but appear to increase as components themselves improve.",
|
"pdf_parse": { |
|
"paper_id": "D07-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In morphologically rich languages, should morphological and syntactic disambiguation be treated sequentially or as a single problem? We describe several efficient, probabilistically interpretable ways to apply joint inference to morphological and syntactic disambiguation using lattice parsing. Joint inference is shown to compare favorably to pipeline parsing methods across a variety of component models. State-of-the-art performance on Hebrew Treebank parsing is demonstrated using the new method. The benefits of joint inference are modest with the current component models, but appear to increase as components themselves improve.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As the field of statistical NLP expands to handle more languages and domains, models appropriate for standard benchmark tasks do not always work well in new situations. Take, for example, parsing the Wall Street Journal Penn Treebank, a longstanding task for which highly accurate context-free models stabilized by the year 2000 (Collins, 1999; Charniak, 2000) . On this task, the Collins model achieves 90% F 1 -accuracy. Extended for new languages by Bikel (2004) , it achieves only 75% on Arabic and 72% on Hebrew. 1 It should come as no surprise that Semitic parsing lags behind English. The Collins model was carefully designed and tuned for WSJ English. Many of the features in the model depend on English syntax or Penn Treebank annotation conventions. Inherent in its crafting is the assumption that a million words of training text are available. Finally, for English, it need not handle morphological ambiguity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 329, |
|
"end": 344, |
|
"text": "(Collins, 1999;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 360, |
|
"text": "Charniak, 2000)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 465, |
|
"text": "Bikel (2004)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 519, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Indeed, the figures cited above for Arabic and Hebrew are achieved using gold-standard morphological disambiguation and part-of-speech tagging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given only surface words, Arabic performance drops by 1.5 F 1 points. The Hebrew Treebank (unlike Arabic) is built over morphemes, a convention we view as sensible, though it complicates parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper considers parsing for morphologically rich languages, with Hebrew as a test case. Morphology and syntax are two levels of linguistic description that interact. This interaction, we argue, can affect disambiguation, so we explore here the matter of joint disambiguation. This involves the comparison of a pipeline (where morphology is inferred first and syntactic parsing follows) with joint inference. We present a generalization of the two, and show new ways to do joint inference for this task that do not involve a computational blow-up.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organized as follows. \u00a72 describes the state of the art in NLP for Hebrew and some phenomena it exhibits that motivate joint inference for morphology and syntax. \u00a73 describes our approach to joint inference using lattice parsing, and gives three variants of weighted lattice parsing with their probabilistic interpretations. The different factor models and their stand-alone performance are given in \u00a74. \u00a75 presents experiments on Hebrew parsing and explores the benefits of joint inference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section we discuss prior work on statistical morphological and syntactic processing of Hebrew and motivate the joint approach. Wintner (2004) reviews work in Hebrew NLP, emphasizing that the challenges stem from the writing system, rich morphology, unique word formation process of roots and patterns, and relative lack of annotated corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 149, |
|
"text": "Wintner (2004)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We know of no publicly available statistical parser designed specifically for Hebrew. Sima'an et al.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLP for Modern Hebrew", |
|
"sec_num": "2.1" |
|
}, |
|
|
{ |
|
"text": "Figure 1: (a.) A sentence in Hebrew (to be read right to left), with (b.) one morphological analysis, (c.) English glosses, and (d.) natural translation; and (e.) a different morphological analysis, (f.) English glosses, and (g.) less natural translation. (h.) shows a morphological \"sausage\" lattice that encodes the morpheme-sequence analyses L( x) possible for a shortened sentence (unmodified \"meadow\"). Shaded states are word boundaries, white states are intra-word morpheme boundaries; in practice we add POS tags to the arcs, to permit disambiguation. According to both native speakers we polled, both interpretations are grammatical; note the long-distance agreement required for grammaticality.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLP for Modern Hebrew", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(2001) built a Hebrew Treebank of 88,747 words (4,783 sentences) and parsed it using a probabilistic model. However, they assumed that the input to the parser was already (perfectly) morphologically disambiguated. This assumption is very common in multilingual parsing (see, for example, Cowan et al., 2005, and Buchholz et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 311, |
|
"text": "Cowan et al., 2005, and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 334, |
|
"text": "Buchholz et al., 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLP for Modern Hebrew", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Until recently, the NLP literature on morphological processing was dominated by the largely non-probabilistic application of finite-state transducers (Kaplan and Kay, 1981; Koskenniemi, 1983; Beesley and Karttunen, 2003) and the largely unsupervised discovery of morphological patterns in text (Goldsmith, 2001; Wicentowski, 2002) ; Hebrew morphology receives special attention in Levinger et al. (1995) , Daya et al. (2004) , and Adler and Elhadad (2006) . Lately a few supervised disambiguation methods have come about, including hidden Markov models (Hakkani-T\u00fcr et al., 2000; Haji\u010d et al., 2001) , conditional random fields (Kudo et al., 2004; Smith et al., 2005b) , and local support vector machines (Habash and Rambow, 2005) . There are also morphological disambiguators designed specifically for Hebrew (Segal, 2000; Bar-Haim et al., 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 172, |
|
"text": "(Kaplan and Kay, 1981;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 191, |
|
"text": "Koskenniemi, 1983;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 220, |
|
"text": "Beesley and Karttunen, 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 311, |
|
"text": "(Goldsmith, 2001;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 330, |
|
"text": "Wicentowski, 2002)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 403, |
|
"text": "Levinger et al. (1995)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 424, |
|
"text": "Daya et al. (2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 455, |
|
"text": "Adler and Elhadad (2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 579, |
|
"text": "(Hakkani-T\u00fcr et al., 2000;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 599, |
|
"text": "Haji\u010d et al., 2001)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 647, |
|
"text": "(Kudo et al., 2004;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 668, |
|
"text": "Smith et al., 2005b)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 730, |
|
"text": "(Habash and Rambow, 2005)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 823, |
|
"text": "(Segal, 2000;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 824, |
|
"end": 846, |
|
"text": "Bar-Haim et al., 2005)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLP for Modern Hebrew", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In NLP, the separation of syntax and morphology is understandable when the latter is impoverished, as in English. When both involve high levels of ambiguity, this separation becomes harder to justify, as argued by Tsarfaty (2006) . To our knowledge, that is the only study to move toward joint inference of syntax and morphology, presenting joint models and testing approximations of these models with two parsers: one a pipeline (segmentation \u2192 tagging \u2192 parsing), the other performing joint inference of segmentation and tagging, with the result piped to the parser. The latter was slightly more accurate. Tsarfaty discussed but did not carry out full joint inference.",
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 229, |
|
"text": "Tsarfaty (2006)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Joint Inference?", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In a morphologically rich language, the different morphemes that make up a word can play a variety of different syntactic roles. A reasonable linguistic analysis might not make such morphemes immediate sisters in the tree. Indeed, the convention of the Hebrew Treebank is to place morphemes (rather than words) at the leaves of the parse tree, allowing morphemes of a word to attach to different nonterminal parents. 2 Generating parse trees over morphemes requires the availability of morphological information when parsing. Because this analysis is not in general reducible to sequence labeling (tagging), the problem is different from POS tagging. Figure 1 gives an example from Hebrew that illustrates the interaction between morphology and syntax. In this example, we show two interpretations of the surface text, with the first being a more common natural analysis for the sentence. The first and third-to-last words' analyses depend on each other if the resulting analysis is to be the more natural one: for this analysis the first seven words have to be a noun phrase, while for the less common analysis (\"lying there nicely\") only the first six words compose a noun phrase, with the last two words composing a verb phrase. Consistency depends on a long-distance dependency that a finite-state morphology model cannot capture, but a model that involves syntactic information can. Disambiguating the syntax aids in disambiguating the morphology, suggesting that a joint model will perform both tasks more accurately.",
|
"cite_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 418, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 651, |
|
"end": 659, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Why Joint Inference?", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In sum, joint inference of morphology and syntax is expected to allow decisions of both kinds to influence each other, to enforce adherence to constraints at both levels, and to diminish the propagation of errors inherent in pipelines.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Joint Inference?", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We now formalize the problem and supply the necessary framework for performing joint morphological disambiguation and syntactic parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Morphology and Syntax", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Let X be the language's word vocabulary and M be its morpheme inventory. The set of valid analyses for a surface word is defined using a morphological lexicon L, which defines L(x) \u2286 M + .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Notation and Morphological Sausages",

"sec_num": "3.1"

},

{

"text": "L( x) \u2286 (M + ) + (sequence of sequences)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation and Morphological Sausages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is the set of wholesentence analyses for sentence x = x 1 , x 2 , ..., x n , produced by concatenating elements of L(x i ) in order. L( x) can be represented as an acyclic lattice with a \"sausage\" shape familiar from speech recognition (Mangu et al., 1999) and machine translation (Lavie et al., 2004) . Fig. 1h shows a sausage lattice for a sentence in Hebrew. We use m to denote an element of L( x) and m i to denote an element of L(x i ); in general, m = m 1 , m 2 , ..., m n . We are interested in a function f :", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 256, |
|
"text": "(Mangu et al., 1999)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 301, |
|
"text": "(Lavie et al., 2004)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 311, |
|
"text": "Fig. 1h", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Notation and Morphological Sausages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "X + \u2192 (M + ) + \u00d7 T,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation and Morphological Sausages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where T is the set of syntactic trees for the language. f can be viewed as a structured classifier. We use D G ( m) \u2286 T to denote the set of valid trees under a grammar G (here, a PCFG with terminal alphabet M) for morpheme sequence m. To be precise, f ( x) selects a mutually consistent morphological and syntactic analysis from", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation and Morphological Sausages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "GEN( x) = { m, \u03c4 | m \u2208 L( x), \u03c4 \u2208 D G ( m)}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation and Morphological Sausages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our mapping f ( x) is based on a joint probability model p(\u03c4, m | x) which combines two probability models p G (\u03c4, m) (a PCFG built on the grammar G) and p L ( m | x) (a morphological disambiguation model built on the lexicon L). Factoring the joint model into sub-models simplifies training, since we can train each model separately, and inference (parsing), as we will see later in this section. Factored estimation has been quite popular in NLP of late (Klein and Manning, 2003b; Smith and Smith, 2004; Smith et al., 2005a, inter alia) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 482, |
|
"text": "(Klein and Manning, 2003b;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 505, |
|
"text": "Smith and Smith, 2004;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 538, |
|
"text": "Smith et al., 2005a, inter alia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The most obvious joint parser uses p G as a conditional model over trees given morphemes and maximizes the joint likelihood:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f lik ( x) = argmax m,\u03c4 \u2208GEN( x) p G (\u03c4 | m) \u2022 p L ( m | x) (1) = argmax m,\u03c4 \u2208GEN( x) [p G (\u03c4, m) / \u2211 \u03c4 ' p G (\u03c4 ', m)] \u2022 [p L ( m, x) / \u2211 m ' p L ( m ', x)]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This is not straightforward, because it involves summing up the trees for each m to compute p G ( m), which calls for the O(| m| 3 )-Inside algorithm to be called on each m. Instead, we use the joint, p G (\u03c4, m), which, strictly speaking, makes the model deficient (\"leaky\"), but permits a dynamic programming solution. Our models will be parametrized using either unnormalized weights (a log-linear model) or multinomial distributions. Either way, both models define scores over parts of analyses, and it may be advantageous to give one model relatively greater strength, especially since we often ignore constant normalizing factors. This is known as a product of experts (Hinton, 1999) , where a new combined distribution over events is defined by multiplying component distributions together and renormalizing. In the present setting, for some value \u03b1 \u2265 0,", |
|
"cite_spans": [ |
|
{ |
|
"start": 674, |
|
"end": 688, |
|
"text": "(Hinton, 1999)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f poe,\u03b1 ( x) = argmax m,\u03c4 \u2208GEN( x) [p G (\u03c4, m) \u2022 p L ( m | x) \u03b1 ] / Z( x, \u03b1) (2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Product of Experts",

"sec_num": "3.2"

},

{

"text": "where Z( x, \u03b1) need not be computed (since it is a constant in m and \u03c4 ). \u03b1 tunes the relative weight of the morphology model with respect to the parsing model. The higher \u03b1 is, the more we trust the morphology model over the parser to correctly disambiguate the sentence. We might trust one model more than the other for a variety of reasons: it could be more robustly or discriminatively estimated, or it could be known to come from a more appropriate family.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This formulation also generalizes two more na\u00efve parsing methods. If \u03b1 = 0, the morphology is modeled only through the PCFG and p L is ignored except as a constraint on which analyses L( x) are allowed (i.e., on the definition of the set GEN( x)). At the other extreme, as \u03b1 \u2192 +\u221e, p L becomes more important. Because p L does not predict trees, p G still \"gets to choose\" the syntax tree, but in the limit it must find a tree for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "argmax m\u2208L( x) p L ( m | x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This is effectively the morphology-first pipeline. 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Product of Experts", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To parse, we apply a dynamic programming algorithm in the (max, +) semiring to solve the f poe,\u03b1 problem shown in Eq. 4. If p L is a unigram-factored model, such that for some single-word morphological model \u03c5 we have",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p L ( m | x) = n i=1 \u03c5( m i | x i )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "then we can implement morpho-syntactic parsing by weighting the sausage lattice. Let the weight of each arc that starts an analysis m i \u2208 L(x i ) be equal to log \u03c5( m i | x i ), and let other arcs have weight 0. In the parsing algorithm, the weight on an arc is summed in when the arc is first used to build a constituent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In general, we would like to define a joint model that assigns (unnormalized) probabilities to elements of GEN( x). If p G is a PCFG and p L can be described as a weighted finite-state transducer, then this joint model is their weighted composition, which is a weighted CFG; call the composed grammar I and its (unnormalized) distribution p I . Compared to G, I will have many more nonterminals if p L has a Markov order greater than 0 (unigram, as above). Because parsing runtime depends heavily on the grammar constant (at best, quadratic in the number of nonterminals), parsing with p I is not computationally attractive. 4 f poe,\u03b1 is not, then, a scalable solution when we wish to use a morphology model p L that can make interdependent decisions about different words in x in context. We propose two new, efficient dynamic programming solutions for joint parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the first, we approximate the distribution p L ( M | x) using a unigram-factored model of the form in Eq. 3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "p L ( m | x) = n i=1 p L ( M i = m i | x) (7), where each factor p L ( M i = m i | x) is a posterior that depends on all of x",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Similar methods were applied by Matsuzaki et al. (2005) and Petrov and Klein (2007) for parsing under a PCFG with nonterminals with latent annotations. Their approach was variational, approximating the true posterior over coarse parses using a sentence-specific PCFG on the coarse nonterminals, created directly out of the true fine-grained PCFG. In our case, we approximate the full distribution over morphological analyses for the sentence by a simpler, sentence-specific unigram model that assumes each word's analysis is to be chosen independently of the others. Note that our model (p L ) does not make such an assumption, only the approximate model p L does, and the approximation is per-sentence. The idea resembles a mean-field variational approximation for graphical models. Turning to implementation, we can solve for p L ( m i | x) exactly using the forward-backward algorithm. We will call this method f vari,\u03b1 (see Eq. 5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 55, |
|
"text": "Matsuzaki et al. (2005)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 60, |
|
"end": 83, |
|
"text": "Petrov and Klein (2007)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A closely related method, applied by Goodman (1996), is called minimum-risk decoding. Goodman called it \"maximum expected recall\" when applying it to parsing. In the HMM community it",
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 51, |
|
"text": "Goodman (1996)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f poe,\u03b1 ( x) = argmax m,\u03c4 \u2208GEN( x) log p G (\u03c4, m) + \u03b1 log p L ( m | x) (4) f vari,\u03b1 ( x) = argmax m,\u03c4 \u2208GEN( x) log p G (\u03c4, m) + \u03b1 n i=1 log p L ( m i | x) (5) f risk,\u03b1 ( x) = argmax m,\u03c4 \u2208GEN( x) log p G (\u03c4, m) + \u03b1 n i=1 p L ( m i | x)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "is sometimes called \"posterior decoding.\" Minimum risk decoding is attributable to Goel and Byrne (2000) . Applied to a single model, it factors the parsing decision by penalizable errors, and chooses the solution that minimizes the risk (expected number of errors under the model). This factors into a sum of expectations, one per potential mistake. This method is expensive for parsing models (since it requires the Inside algorithm to compute expected recall mistakes), but entirely reasonable for sequence labeling models. The idea is to score each word-analysis m i in the morphological lattice by the expected",

"cite_spans": [
    
{

"start": 83,

"end": 104,

"text": "Goel and Byrne (2000)",

"ref_id": "BIBREF9"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Parsing Algorithms",

"sec_num": "3.3"

},

{

"text": "value (under p L ) that m i is present in the final analysis m. This is, of course, p L ( M i = m i | x)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": ", the same quantity computed for f vari,\u03b1 , except the score of a path in the lattice is now a sum of posteriors rather than a product. Our second approximate joint parser tries to maximize the probability of the parse (as before) and at the same time to minimize the risk of the morphological analysis. See f risk,\u03b1 in Eq. 6; the only difference between f risk,\u03b1 and f vari,\u03b1 is whether posteriors are added (f risk,\u03b1 ) or multiplied (f vari,\u03b1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To summarize this section, f vari,\u03b1 and f risk,\u03b1 are two approximations to the expensive-in-general f poe,\u03b1 that boil down to parsing over weighted lattices. The only difference between them is how the lattice is weighted:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "using \u03b1 log p L ( m i | x) for f vari,\u03b1 or using \u03b1p L ( m i | x) for f risk,\u03b1 . 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the case of a unigram p L , f poe,\u03b1 is equivalent to f vari,\u03b1 ; otherwise f poe,\u03b1 is likely to be too expensive.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Algorithms", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To parse the weighted lattices using f vari,\u03b1 and f risk,\u03b1 in the previous section, we use lattice parsing. Lattice parsing is a straightforward generalization of string parsing that indexes constituents by states in the lattice rather than word interstices. 5 At parsing time, a (max, +) lattice parser finds the best combined parse tree and path through the lattice. Importantly, the data structures that are used in chart parsing need not change in order to accommodate lattices. The generalization over classic Earley or CKY parsing is simple: keep in the parsing chart constituents created over a pair of start state and end state (instead of start position and end position), and (if desired) factor in weights on lattice arcs; see Hall (2005) . 5 Until now, we have talked about weighting word analyses, which may cover several arcs, rather than arcs. In practice we apply the weight to the first arc of a word analysis, and weight the remaining arcs of that analysis with 0 (no cost or benefit), giving the desired effect.",
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 164, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1013, |
|
"end": 1024, |
|
"text": "Hall (2005)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lattice Parsing", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "A fair comparison of joint and pipeline parsing must make some attempt to control for the component models. We describe here two PCFGs we used for p G (\u03c4, m) and two finite-state morphological models we used for p L ( m | x). We show how these models perform in stand-alone evaluations. For all experiments, we used the Hebrew Treebank (Sima'an et al., 2001 ). After removing traces and removing functional information from the nonterminals, we had 3,770 sentences in the training set, 371 sentences in the development set (used primarily to select the value of \u03b1) and 370 sentences in the test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 357, |
|
"text": "(Sima'an et al., 2001", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our first syntax model is an unbinarized PCFG trained using relative frequencies. Preterminal (POS tag \u2192 morpheme) rules are smoothed using backoff to a model that predicts the morpheme length and letter sequence. The PCFG is not binarized. This grammar is remarkably good, given the limited effort that went into it. The rules in the training set had high coverage with respect to the development set: an oracle experiment in which we maximized the number of recovered gold-standard constituents (on the development set) gave F 1 accuracy of 93.7%. In fact, its accuracy supersedes more complex, lexicalized, models: given goldstandard morphology, it achieves 81.2% (compared to 72.0% by Bikel's parser, with head rules specified by a native speaker). This is probably attributable to the dataset's size, which makes training with highly-parameterized lexicalized models precarious and prone to overfitting. With first-order vertical markovization (i.e., annotating each nonterminal with its parent as in Johnson, 1998) , accuracy is also at 81.2%. Tuning the horizontal markovization of the grammar rules (Klein and Manning, 2003a ) had a small, adverse effect on this dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1006, |
|
"end": 1020, |
|
"text": "Johnson, 1998)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1107, |
|
"end": 1132, |
|
"text": "(Klein and Manning, 2003a", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntax Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Since the PCFG model was relatively successful compared to lexicalized models, and is faster to run, we decided to use a vanilla PCFG, denoted G van , and a parent-annotated version of that PCFG (Johnson, 1998), denoted G v=2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntax Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Both of our morphology models use the same morphological lexicon L, which we describe first.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphology Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this work, a morphological analysis of a word is a sequence of morphemes, possibly with a tag for each morpheme. There are several available analyzers for Hebrew, including Yona and Wintner (2005) and Segal (2000) . We use instead an empiricallyconstructed generative lexicon that has the advantage of matching the Treebank data and conventions. If the Treebank is enriched, this would then directly benefit the lexicon and our models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 199, |
|
"text": "Yona and Wintner (2005)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 216, |
|
"text": "Segal (2000)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Lexicon", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Starting with the training data from the Hebrew Treebank, we first create a set of prefixes M p \u2282 M; this set includes any morpheme seen in a non-final position within any word. We also create a set of stems M s \u2282 M that includes any morpheme seen in a final position in a word. This effectively captures the morphological analysis convention in the Hebrew Treebank, where a stem is prefixed by a relatively dominant low-entropy sequence of 0-5 prefix morphemes. For example, MHKLB (\"from the dog\") is analyzed as M+H+KLB with prefixes M (\"from\") and H (\"the\") and KLB (\"dog\") is the stem. In practice, |M p | = 124 (including some conventions for numerals) and |M s | = 13,588. The morphological lexicon is then defined as any analysis given M p and M s :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Lexicon", |
|
"sec_num": "4.2.1" |
|
}, |
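The prefix/stem inventories described above can be sketched in a few lines of Python; the input format (gold-segmented words as morpheme tuples) is an assumption of this sketch, not the Treebank's actual file format.

```python
def build_lexicon_sets(segmented_words):
    """Collect prefix and stem inventories from gold-segmented words,
    following the convention described in the text: any morpheme seen
    in a non-final position is a prefix, and any morpheme seen in
    word-final position is a stem.

    segmented_words: iterable of morpheme tuples, e.g. ('M', 'H', 'KLB').
    Returns (prefixes, stems) as two sets.
    """
    prefixes, stems = set(), set()
    for morphemes in segmented_words:
        prefixes.update(morphemes[:-1])  # all non-final morphemes
        stems.add(morphemes[-1])         # the final morpheme
    return prefixes, stems
```

For example, `build_lexicon_sets([('M', 'H', 'KLB'), ('KLB',)])` yields prefixes {'M', 'H'} and stems {'KLB'}.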
|
{ |
|
"text": "L(x) = {m k 1 \u2208 M * p \u00d7 M s | concat(m k 1 ) = x)} \u222a{m k 1 | count(m k 1 , x) \u2265 1} (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Lexicon", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "where m k 1 denotes m 1 , ..., m k and count(m k 1 , x) denotes the number of occurrences of x disambiguated as m k 1 in the training set. Note that L(x) also includes any analysis of x observed in the training data. This permits the memorization of any observed analysis that is more involved than simple segmentation (4% of word tokens in the training set; e.g., LXDR (\"to the room\") is analyzed as L+H+XDR). This will have an effect on evaluation (see \u00a75.1). On the development data, L has 98.6% coverage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Lexicon", |
|
"sec_num": "4.2.1" |
|
}, |
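The lexicon definition in Eq. 9 (all prefix*-then-stem segmentations, unioned with memorized analyses) can be sketched as a small recursive enumerator; the function name and argument layout are hypothetical.

```python
def analyses(x, prefixes, stems, memorized=()):
    """Enumerate the morphological analyses L(x) of a word x: every
    segmentation of x into zero or more known prefixes followed by a
    known stem, unioned with any analysis memorized from the training
    data (which need not be a pure segmentation of x).
    """
    out = set(map(tuple, memorized))

    def split(rest, acc):
        # The remaining characters may themselves be the stem...
        if rest in stems:
            out.add(tuple(acc) + (rest,))
        # ...or start with a known prefix, leaving a smaller problem.
        for i in range(1, len(rest)):
            if rest[:i] in prefixes:
                split(rest[i:], acc + [rest[:i]])

    split(x, [])
    return out
```

On the running example, `analyses('MHKLB', {'M', 'H'}, {'KLB', 'HKLB', 'MHKLB'})` enumerates M+H+KLB, M+HKLB, and the unsegmented MHKLB, while the memorized set is how an analysis like L+H+XDR for LXDR (not a pure segmentation) can be returned.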
|
{ |
|
"text": "The baseline morphology model, p uni L , first defines a joint distribution following Eq. 8. The word model factors out when we conditionalize to form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram Baseline", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "p uni L ( m 1 , ..., m k | x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram Baseline", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": ". The prefix sequence model is multinomial estimated by MLE. The stem model (conditioned on the prefix sequence) is smoothed to permit any stem that is a sequence of Hebrew characters. On the development data, p uni L is 88.8% accurate (by word).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram Baseline", |
|
"sec_num": "4.2.2" |
|
}, |
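A toy sketch of this unigram baseline follows: an MLE multinomial over prefix sequences times a stem model conditioned on the prefix sequence, renormalized over a word's candidate analyses. The class name and the smoothing constant are invented for illustration; the paper's actual smoothing over Hebrew letter sequences is not reproduced here.

```python
from collections import Counter

class UnigramMorphModel:
    """Toy unigram morphology baseline: p(prefix sequence) estimated by
    MLE times a (crudely) smoothed p(stem | prefix sequence), then
    conditionalized over the candidate analyses of each word."""

    def __init__(self, training_analyses):
        self.prefix_counts = Counter()  # counts of prefix sequences
        self.pair_counts = Counter()    # counts of full analyses
        for m in training_analyses:
            self.prefix_counts[m[:-1]] += 1
            self.pair_counts[m] += 1

    def joint(self, m, smooth=1e-6):
        pre = m[:-1]
        p_prefix = self.prefix_counts[pre] / sum(self.prefix_counts.values())
        # Additively smoothed stem-given-prefix probability, so unseen
        # stems still receive some mass.
        p_stem = (self.pair_counts[m] + smooth) / (self.prefix_counts[pre] + 1)
        return p_prefix * p_stem

    def conditional(self, candidates):
        """Renormalize the joint scores over a word's candidate analyses."""
        scores = {m: self.joint(m) for m in candidates}
        z = sum(scores.values())
        return {m: s / z for m, s in scores.items()}
```

Because the word model factors out in the conditionalization, only the prefix-sequence and stem factors matter when comparing analyses of the same word.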
|
{ |
|
"text": "The second morphology model, p crf L , which is based on the same morphological lexicon L, uses a second-order conditional random field (Lafferty et al., 2001) to disambiguate the full sentence by modeling local contexts (Kudo et al., 2004; Smith et al., 2005b) . Space does not permit a full description; the model uses all the features of Smith et al. (2005b) except the \"lemma\" portion of the model, since the Hebrew Treebank does not provide lemmas. The weights are trained to maximize the probability of the correct path through the morphological lattice, conditioned on the lattice. This is therefore a discriminative model that defines p L ( m | x) directly, though we ignore the normalization factor in parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 159, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 240, |
|
"text": "(Kudo et al., 2004;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 261, |
|
"text": "Smith et al., 2005b)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 361, |
|
"text": "Smith et al. (2005b)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Random Field", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Until now we have described p L as a model of morphemes, but this CRF is trained to predict POS tags as well-we can either use the tags (i.e., label the morphological lattice with tag/morpheme pairs,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Random Field", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "p uni L ( m 1 , m 2 , ..., m k , x) = p(x | m 1 , m 2 , ..., m k ) word \u2022 p(m k | m 1 , ..., m k\u22121 ) stem \u2022 p( m 1 , ..., m k\u22121 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Random Field", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "prefix sequence (8) so that the lattice parser finds a parse that is consistent under both models), or sum the tags out and let the parser do the tagging. One subtlety is the tagging of words not seen in the training data; for such words an unsegmented hypothesis with tag UN-KNOWN is included in the lattice and may therefore be selected by the CRF. On the development data, p crf L is 89.8% accurate on morphology, with 74.9% fine-grained POS-tagging F 1 -accuracy (see \u00a75.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 19, |
|
"text": "(8)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Random Field", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Note on generative and discriminative models. The reader may be skeptical of our choice to combine a generative PCFG with a discrimative CRF. We point out that both are used to define conditional distributions over desired \"output\" structures given \"input\" sequences. Notwithstanding the fact that the factors can be estimated in very different ways, our combination in an exact or approximate product-ofexperts is a reasonable and principled approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditional Random Field", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "In this section we evaluate parsing performance, but an evaluation issue is resolved first.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The \"Parseval\" measures (Black et al., 1991) are used to evaluate a parser's phrase-structure trees against a gold standard. They compute precision and recall of constituents, each indexed by a label and two endpoints. As pointed out by Tsarfaty (2006) , joint parsing of morphology and syntax renders this indexing inappropriate, since it assumes the yields of the trees are identical-that assumption is violated if there are any errors in the hypothesized m. Tsarfaty (2006) instead indexed by non-whitespace character positions, to deal with segmentation mismatches. In general (and in this work) that is still insufficient, since L( x) may include m that are not simply segmentations of x (see \u00a74.2.1). Roark et al. (2006) propose an evaluation metric for comparing a parse tree over a sentence generated by a speech recognizer to a gold-standard parse. As in our case, the hypothesized tree could have a different yield than the original gold-standard parse tree, because of errors made by the speech recognizer. The metric is based on an alignment between the hypothesized sentence and the goldstandard sentence. We used a similar evaluation metric, which takes into account the information about parallel word boundaries as well, a piece of information that does not appear naturally in speech recognition. Given the correct m * and the hypothe-sis\u02c6 m, we use dynamic programming to find an optimal many-to-many monotonic alignment between the atomic morphemes in the two sequences. The algorithm penalizes each violation (by a morpheme) of a one-to-one correspondence, 6 and each character edit required to transform one side of a correspondence into the other (without whitespace). Word boundaries are (here) known and included as index positions. In the case where\u02c6 m = m * (or equal up to whitespace) the method is identical to Parseval (and also to Tsarfaty, 2006) . 
POS tag accuracy is evaluated the same way, for the same reasons; we report F 1 -accuracy for tagging and parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 44, |
|
"text": "(Black et al., 1991)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 252, |
|
"text": "Tsarfaty (2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 476, |
|
"text": "Tsarfaty (2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 726, |
|
"text": "Roark et al. (2006)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1861, |
|
"end": 1876, |
|
"text": "Tsarfaty, 2006)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In our experiment we vary four settings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 Decoding algorithm: f poe,\u03b1 , f risk,\u03b1 , or f vari,\u03b1 ( \u00a73.3). \u2022 Syntax model: G van or G v=2 ( \u00a74.1). \u2022 Morphology model: p uni L or p crf L ( \u00a74.2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": ". In the latter case, we can use the scores over morpheme sequences only (summing out tags before lattice parsing; denoted m.-p crf L ) or the full model over morphemes and tags, denoted t.-p crf L . 7 \u2022 \u03b1, the relative strength given to the morphology model (see \u00a73). We tested values of \u03b1 in {0, +\u221e} \u222a {10 q | q \u2208 {0, 1, ..., 16}}. Recall that \u03b1 = 0 ignores the morphology model probabilities altogether (using an unweighted lattice), 6 That is, in a correspondence of a morphemes in one string with b in the other, the penalty is a + b \u2212 2, since the morpheme on each side is not in violation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 438, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "7 One subtlety is that any arc with the UNKNOWN POS tag can be relabeled-to any other tag-by the syntax model, whose preterminal rules are smoothed. This was crucial for \u03b1 = +\u221e (pipeline) parsing with t. and as \u03b1 \u2192 +\u221e a morphology-first pipeline is approached.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We measured four outcome values: segmentation accuracy (fraction of word tokens segmented correctly), fine-and coarse-grained tagging accuracy, 8 and parsing accuracy. For tagging and parsing, F 1measures are given, according to the generalized evaluation measure described in \u00a75.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Tab. 1 compares parsing with tuned \u03b1 values to the pipeline. The best results were achieved using f vari,\u03b1 , using the CRF and joint disambiguation. Without the CRF (using p uni L ), the difference between the decoding algorithms is less apparent, suggesting an interaction between the sophistication of the components and the best way to decode with them. These results suggest that f vari,\u03b1 , which permits p L to \"veto\" any structure involving a morphological analysis for any word that is a posteriori unlikely (note that log p L ( m i | x) can be an arbitrarily large negative number), is beneficial as a \"filter\" on parses. 9 f risk,\u03b1 , on the other hand, is only allowed to give \"bonuses\" of up to \u03b1 to each morphological analysis that p L believes in; its influence is therefore weaker. This result is consistent with the findings of Petrov et al. (2007) for another approximate parsing task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 842, |
|
"end": 862, |
|
"text": "Petrov et al. (2007)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The advantage of the parent-annotated PCFG is also more apparent when the CRF is used for morphology, and when \u03b1 is tuned. All other things equal, then, p crf L led to higher accuracy all around. Letting the CRF help predict the POS tags helped tagging accuracy but not parsing accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "While the gains over the pipeline are modest, the segmentation, fine POS, and parsing accuracy scores achieved by joint disambiguation with f vari,\u03b1 with the CRF are significantly better than any of the pipeline conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Interestingly, if we had not tested with the CRF, we might have reached a very different conclusion about the usefulness of tuning \u03b1 as opposed to a pipeline. With the unigram morphology model, joint parsing frequently underperforms the pipeline, sometimes even signficantly. The explanation, we max. length 40) . This table shows the performance of morphological segmentation, part-of-speech tagging, coarse partof-speech tagging and parsing when using an oracle to select the best \u03b1 for each sentence. The notation and interpretation of the numbers are the same as in Tab. 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 311, |
|
"text": "max. length 40)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "believe, has to do with the ability of the unigram model to estimate a good distribution over analyses. While the unigram model is nearly as good as the CRF at picking the right segmentation for a word, joint parsing demands much more. In case the best segmentation does not lead to a grammatical morpheme sequence (under the syntax model), the morphology model needs to be able to give relative strengths to the alternatives. The unigram model is less able to do this, because it ignores the context of the word, and so the benefit of joint parsing is lost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Most commonly the tuned value of \u03b1 is around 10 (not shown, to preserve clarity). Because of ignored normalization constants, this does not mean that morphology is \"10\u00d7 more important than syntax,\" but it does mean that, for a particular p L and p G , tuning their relative importance in decoding can improve accuracy. In Tab. 2 we show how performance would improve if the oracle value of \u03b1 was selected for each test-set sentence; this further highlights the potential impact of perfecting the tradeoff between models. Of course, selecting \u03b1 automatically at test-time, per sentence, is an open problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
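The oracle-alpha experiment referenced here (picking the best alpha per test sentence) can be sketched generically; the `decode` and `score` interfaces are hypothetical placeholders supplied by the caller, not the paper's actual parser and evaluator.

```python
def oracle_alpha(sentences, alphas, decode, score):
    """Per-sentence oracle alpha selection: decode each sentence under
    every alpha in the grid and keep the alpha whose output scores best
    against the gold standard.

    sentences: list of (sentence, gold) pairs
    alphas   : candidate alpha values to try
    decode   : decode(sentence, alpha) -> hypothesized output
    score    : score(output, gold) -> higher is better
    """
    picks = []
    for sent, gold in sentences:
        best = max(alphas, key=lambda a: score(decode(sent, a), gold))
        picks.append((sent, best))
    return picks
```

This is an upper bound, of course: at test time the gold standard is unavailable, which is why selecting alpha automatically per sentence remains an open problem.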
|
{ |
|
"text": "To our knowledge, the parsers we have described represent the state-of-the-art in Modern Hebrew parsing. The closest result is Tsarfaty (2006) , which we have not directly replicated. Tsarfaty's model is essentially a pipeline application of f poe,\u221e with a grammar like p Gvan . Her work focused more on the interplay between the segmentation and POS tagging models and the amount of information passed to the parser. Some key differences preclude direct comparison: we modeled fine-grained tags (though we report both kinds of tagging accurcy), we employed a richer morphological lexicon (permitting analyses that are not just segmentation), and a different training/test split and length filter (we used longer sentences). Nonetheless, our conclusions support the argument in Tsarfaty (2006) for more integrated parsing methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 142, |
|
"text": "Tsarfaty (2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 778, |
|
"end": 793, |
|
"text": "Tsarfaty (2006)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We conclude that tuning the relative importance of the two models-rather than pipelining to give one infinitely more importance-can provide an improvement on segmentation, tagging, and parsing accuracy. This suggests that future parsing efforts for languages with rich morphology might continue to assume separately-trained (and separatelyimproved) morphology and syntax components, which would stand to gain from joint decoding. In our experiments, better morphological disambiguation was crucial to getting any benefit from joint decoding. Our result also suggests that exploring new, fully-integrated models (and training methods for them) may be advantageous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The Arabic Treebank, by contrast, annotates words morphologically but keeps the morphemes together as a single node tagged with a POS sequence. In Bikel's Arabic parser, complex POS tags are projected to a small atomic set; it is unclear how much information is lost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There is a slight difference. If no parse tree exists for the pL-best morphological analysis, then a less probable m may be chosen. So as \u03b1 \u2192 +\u221e, we can view f lik,\u03b1 as finding the best grammatical m and its best tree-not exactly a pipeline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In prior work involving factored syntax modelslexicalized(Klein and Manning, 2003b) and bilingual(Smith and Smith, 2004)-fpoe,1 was applied, and the asymptotic runtime went to O(n 5 ) and O(n 7 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although the Hebrew Treebank is small, the size of its POS tagset is large (four times larger than the Penn Treebank), because the tags encode morphological features (gender, person, and number). These features have either been ignored in prior work or encoded differently. In order for our POS-tagging figures to be reasonably comparable to previous work, we include accuracy for coarse-grained tags (only the core part of speech) tags as well as the detailed Hebrew Treebank tags.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another way to describe this combination is to call it a product of | x|+1 experts: one for the morphological analysis of each word, plus the grammar. The morphology experts (softly) veto any analysis that is dubious based on surface criteria, and the grammar (softly) vetoes less-grammatical parses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We showed that joint morpho-syntactic parsing can improve the accuracy of both kinds of disambiguation. Several efficient parsing methods were presented, using factored state-of-the-art morphology and syntax models for the language under consideration. We demonstrated state-of-the-art performance on and consistent improvements across many settings for Modern Hebrew, a morphologically-rich language with a relatively small treebank.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An unsupervised morpheme-based HMM for Hebrew morphological disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Adler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Adler and M. Elhadad. 2006. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation. In Proc. of COLING-ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Choosing an optimal architecture for segmentation and POStagging of Modern Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bar-Haim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Winter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL Workshop on Computational Approaches to Semitic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bar-Haim, K. Sima'an, and Y. Winter. 2005. Choos- ing an optimal architecture for segmentation and POS- tagging of Modern Hebrew. In Proc. of ACL Workshop on Computational Approaches to Semitic Languages.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Finite State Morphology. CSLI", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Beesley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Karttunen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. R. Beesley and L. Karttunen. 2003. Finite State Mor- phology. CSLI.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multilingual statistical parsing engine", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bikel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Bikel. 2004. Multilingual statistical pars- ing engine. http://www.cis.upenn.edu/ \u223c dbikel/software.html#stat-parser.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A procedure for quantitatively comparing the syntactic coverage of English grammars", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Flickenger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gdaniec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Harrison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Ingria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Klavans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Liberman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proc. of DARPA Workshop on Speech and Natural Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Gr- ishman, P Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proc. of DARPA Workshop on Speech and Natural Language.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "CoNLL-X shared task on multilingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Buchholz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Marsi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. of CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proc. of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Morphology and reranking for the statistical parsing of Spanish", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. of HLT-EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, U. Penn. B. Cowan and M. Collins. 2005. Morphology and reranking for the statistical parsing of Spanish. In Proc. of HLT-EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning Hebrew roots: Machine learning with linguistic constraints", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Daya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wintner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Daya, D. Roth, and S. Wintner. 2004. Learning Hebrew roots: Machine learning with linguistic con- straints. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Minimum Bayes risk automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computer Speech and Language", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "115--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Goel and W. Byrne. 2000. Minimum Bayes risk auto- matic speech recognition. Computer Speech and Lan- guage, 14(2):115-135.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Unsupervised learning of the morphology of natural language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goldsmith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Comp. Ling", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "153--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Goldsmith. 2001. Unsupervised learning of the mor- phology of natural language. Comp. Ling., 27(2):153- 198.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Parsing algorithms and metrics", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Goodman. 1996. Parsing algorithms and metrics. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Arabic tokenization, part-of-speech tagging, and morphological disambiguation in one fell swoop", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Habash and O. Rambow. 2005. Arabic tokenization, part-of-speech tagging, and morphological disambiguation in one fell swoop. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Serial combination of rules and statistics: A case study in Czech tagging", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Krbec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Kv\u011bto\u0148", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Oliva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Petkevi\u010d", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Haji\u010d, P. Krbec, P. Kv\u011bto\u0148, K. Oliva, and V. Petkevi\u010d. 2001. Serial combination of rules and statistics: A case study in Czech tagging. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Statistical morphological disambiguation for agglutinative languages", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "T\u00fcr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proc. of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Z. Hakkani-T\u00fcr, K. Oflazer, and G. T\u00fcr. 2000. Statistical morphological disambiguation for agglutinative languages. In Proc. of COLING.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Best-first Word-lattice Parsing: Techniques for Integrated Syntactic Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Hall. 2005. Best-first Word-lattice Parsing: Techniques for Integrated Syntactic Language Modeling. Ph.D. thesis, Brown University.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Products of experts", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. of ICANN", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. E. Hinton. 1999. Products of experts. In Proc. of ICANN.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "PCFG models of linguistic tree representations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Comp. Ling", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "613--632", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Johnson. 1998. PCFG models of linguistic tree representations. Comp. Ling., 24(4):613-632.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Phonological rules and finite-state transducers", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. M. Kaplan and M. Kay. 1981. Phonological rules and finite-state transducers. Presented at LSA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C. D. Manning. 2003a. Accurate unlexicalized parsing. In Proc. of ACL, pages 423-430.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Fast exact inference with a factored model for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Advances in NIPS 15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C. D. Manning. 2003b. Fast exact inference with a factored model for natural language parsing. In Advances in NIPS 15.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A general computational model of word-form recognition and production", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Koskenniemi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Koskenniemi. 1983. A general computational model of word-form recognition and production. Technical Report 11, University of Helsinki.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Applying conditional random fields to Japanese morphological analysis", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Kudo, K. Yamamoto, and Y. Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Rapid prototyping of a transferbased Hebrew-to-English machine translation system", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wintner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Eytani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Peterson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Probst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of TMI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Lavie, S. Wintner, Y. Eytani, E. Peterson, and K. Probst. 2004. Rapid prototyping of a transfer-based Hebrew-to-English machine translation system. In Proc. of TMI.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning morpholexical probabilities from an untagged corpus with an application to Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Levinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Ornan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Itai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Comp. Ling", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "383--404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Levinger, U. Ornan, and A. Itai. 1995. Learning morpholexical probabilities from an untagged corpus with an application to Hebrew. Comp. Ling., 21:383-404.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The Penn Arabic Treebank: Building a large-scale annotated Arabic corpus", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Maamouri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bies", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Buckwalter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Mekki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of NEMLAR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Maamouri, A. Bies, T. Buckwalter, and W. Mekki. 2004. The Penn Arabic Treebank: Building a large-scale annotated Arabic corpus. In Proc. of NEMLAR.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Finding consensus among words: Lattice-based word error minimization", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Mangu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. of ECSCT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Mangu, E. Brill, and A. Stolcke. 1999. Finding consensus among words: Lattice-based word error minimization. In Proc. of ECSCT.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Probabilistic CFG with latent annotations", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic CFG with latent annotations. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Improved inference for unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Petrov and D. Klein. 2007. Improved inference for unlexicalized parsing. In Proc. of HLT-NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Sparseval: Evaluation metrics for parsing speech", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Harper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Krasnyanskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lease", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Shafran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Yung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Roark, M. Harper, E. Charniak, B. Dorr, M. Johnson, J. Kahn, Y. Liu, M. Ostendorf, J. Hale, A. Krasnyanskaya, M. Lease, I. Shafran, M. Snover, R. Stewart, and Lisa Yung. 2006. Sparseval: Evaluation metrics for parsing speech. In Proc. of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A probabilistic morphological analyzer for Hebrew undotted texts", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Segal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Segal. 2000. A probabilistic morphological analyzer for Hebrew undotted texts. Master's thesis, Technion.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Building a treebank of modern Hebrew text", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Itai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Winter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Altman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Nativ", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Journal Traitement Automatique des Langues", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Sima'an, A. Itai, Y. Winter, A. Altman, and N. Nativ. 2001. Building a treebank of modern Hebrew text. Journal Traitement Automatique des Langues. Available at http://mila.cs.technion.ac.il.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Bilingual parsing with factored estimation: Using English to parse Korean", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. A. Smith and N. A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proc. of EMNLP, pages 49-56.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Logarithmic opinion pools for conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Smith, T. Cohn, and M. Osborne. 2005a. Logarithmic opinion pools for conditional random fields. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Context-based morphological disambiguation with random fields", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tromble", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of HLT-EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. A. Smith, D. A. Smith, and R. W. Tromble. 2005b. Context-based morphological disambiguation with random fields. In Proc. of HLT-EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Integrated morphological and syntactic disambiguation for Modern Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING-ACL Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Tsarfaty. 2006. Integrated morphological and syntactic disambiguation for Modern Hebrew. In Proc. of COLING-ACL Student Research Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Wicentowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Wicentowski. 2002. Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework. Ph.D. thesis, Johns Hopkins U.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Hebrew computational linguistics: Past and future", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wintner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Art. Int. Rev", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "113--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Wintner. 2004. Hebrew computational linguistics: Past and future. Art. Int. Rev., 21(2):113-138.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A finite-state morphological grammar of Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Yona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wintner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL Workshop on Computational Approaches to Semitic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Yona and S. Wintner. 2005. A finite-state morphological grammar of Hebrew. In Proc. of ACL Workshop on Computational Approaches to Semitic Languages.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "ph. model syntax model seg. acc fine POS F1 coarse PO" |
|
}, |
|
"TABREF1": { |
|
"text": "Oracle results of experiments on Hebrew (test data,", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td>S F 1</td></tr><tr><td/><td/><td/><td>p a r s e F 1</td></tr><tr><td/><td>p uni L</td><td>pG van</td><td>90.7 73.4 78.5 64.3</td></tr><tr><td>f risk,\u03b1</td><td>m.-p crf L t.-p crf L</td><td colspan=\"2\">pG v=2 90.2 73.0 78.5 64.9 pG van 90.7 75.4 80.0 65.2 pG v=2 90.8 75.1 80.2 65.4 pG van 91.2 78.1 82.4 65.7</td></tr><tr><td/><td/><td colspan=\"2\">pG v=2 91.1 78.0 82.2 66.2</td></tr><tr><td/><td>p uni L</td><td>pG van</td><td>90.6 73.2 78.3 63.5</td></tr><tr><td>fvari,\u03b1</td><td>m.-p crf L t.-p crf L</td><td colspan=\"2\">pG v=2 90.2 72.8 78.4 64.4 pG van 92.0 76.6 81.5 66.9 pG v=2 91.9 76.2 81.6 66.9 pG van 91.8 79.1 83.2 66.5</td></tr><tr><td/><td/><td colspan=\"2\">pG v=2 91.7 78.7 83.0 67.4</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |