|
{ |
|
"paper_id": "N13-1004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:41:18.144642Z" |
|
}, |
|
"title": "Simultaneous Word-Morpheme Alignment for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Elif", |
|
"middle": [], |
|
"last": "Eyig\u00f6z", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Computer Science University of Rochester Rochester", |
|
"location": { |
|
"postCode": "14627", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rochester Rochester", |
|
"location": { |
|
"postCode": "14627", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Mellon University", |
|
"location": { |
|
"postBox": "PO Box 24866", |
|
"settlement": "Doha", |
|
"country": "Qatar" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Current word alignment models for statistical machine translation do not address morphology beyond merely splitting words. We present a two-level alignment model that distinguishes between words and morphemes, in which we embed an IBM Model 1 inside an HMM based word alignment model. The model jointly induces word and morpheme alignments using an EM algorithm. We evaluated our model on Turkish-English parallel data. We obtained significant improvement of BLEU scores over IBM Model 4. Our results indicate that utilizing information from morphology improves the quality of word alignments.", |
|
"pdf_parse": { |
|
"paper_id": "N13-1004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Current word alignment models for statistical machine translation do not address morphology beyond merely splitting words. We present a two-level alignment model that distinguishes between words and morphemes, in which we embed an IBM Model 1 inside an HMM based word alignment model. The model jointly induces word and morpheme alignments using an EM algorithm. We evaluated our model on Turkish-English parallel data. We obtained significant improvement of BLEU scores over IBM Model 4. Our results indicate that utilizing information from morphology improves the quality of word alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "All current state-of-the-art approaches to SMT rely on an automatically word-aligned corpus. However, current alignment models do not take into account the morpheme, the smallest unit of syntax, beyond merely splitting words. Since morphology has not been addressed explicitly in word alignment models, researchers have resorted to tweaking SMT systems by manipulating the content and the form of what should be the so-called \"word\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since the word is the smallest unit of translation from the standpoint of word alignment models, the central focus of research on translating morphologically rich languages has been decomposition of morphologically complex words into tokens of the right granularity and representation for machine translation. Chung and Gildea (2009) and Naradowsky and Toutanova (2011) use unsupervised methods to find word segmentations that create a one-to-one mapping of words in both languages. Al-Onaizan et al. (1999) , \u010cmejrek et al. (2003) , and Goldwater and McClosky (2005) manipulate morphologically rich languages by selective lemmatization. Lee (2004) attempts to learn the probability of deleting or merging Arabic morphemes for Arabic to English translation. Niessen and Ney (2000) split German compound nouns, and merge German phrases that correspond to a single English word. Alternatively, Yeniterzi and Oflazer (2010) manipulate words of the morphologically poor side of a language pair to mimic having a morphological structure similar to the richer side via exploiting syntactic structure, in order to improve the similarity of words on both sides of the translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 333, |
|
"text": "Chung and Gildea (2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 369, |
|
"text": "Naradowsky and Toutanova (2011)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 507, |
|
"text": "Al-Onaizan et al. (1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 531, |
|
"text": "\u010cmejrek et al. (2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 567, |
|
"text": "Goldwater and McClosky (2005)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 648, |
|
"text": "Lee (2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 780, |
|
"text": "Niessen and Ney (2000)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 920, |
|
"text": "Yeniterzi and Oflazer (2010)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present an alignment model that assumes internal structure for words, and we can legitimately talk about words and their morphemes in line with the linguistic conception of these terms. Our model avoids the problem of collapsing words and morphemes into one single category. We adopt a twolevel representation of alignment: the first level involves word alignment, the second level involves morpheme alignment in the scope of a given word alignment. The model jointly induces word and morpheme alignments using an EM algorithm. We develop our model in two stages. Our initial model is analogous to IBM Model 1: the first level is a bag of words in a pair of sentences, and the second level is a bag of morphemes. In this manner, we embed one IBM Model 1 in the scope of another IBM Model 1. At the second stage, by introducing distortion probabilities at the word level, we develop an HMM extension of the initial model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluated the performance of our model on the Turkish-English pair both on hand-aligned data and by running end-to-end machine translation experiments. To evaluate our results, we created gold word alignments for 75 Turkish-English sentences. We obtain significant improvement of AER and BLEU scores over IBM Model 4. Section 2.1 introduces the concept of morpheme alignment in terms of its relation to word alignment. Section 2.2 presents the derivation of the EM algorithm and Section 3 presents the results of our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Two-level Alignment Model (TAM)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Following the standard alignment models of Brown et al. 1993, we assume one-to-many alignment for both words and morphemes. A word alignment a w (or only a) is a function mapping a set of word positions in a source language sentence to a set of word positions in a target language sentence. A morpheme alignment a m is a function mapping a set of morpheme positions in a source language sentence to a set of morpheme positions in a target language sentence. A morpheme position is a pair of integers (j, k), which defines a word position j and a relative morpheme position k in the word at position j. The alignments below are depicted in Figures 1 and 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "a w (1) = 1 a m (2, 1) = (1, 1) a w (2) = 1 Figure 1 shows a word alignment between two sentences. Figure 2 shows the morpheme alignment between same sentences. We assume that all unaligned morphemes in a sentence map to a special null morpheme.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 52, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 107, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A morpheme alignment a m and a word alignment a w are compatible if and only if they satisfy the following conditions: If the morpheme alignment a m maps a morpheme of e to a morpheme of f , then the word alignment a w maps e to f . If the word alignment a w maps e to f , then the morpheme alignment a m maps at least one morpheme of e to a morpheme of f . If the word alignment a w maps e to null, then all of its morphemes are mapped to null. In sum, a morpheme alignment a m and a word alignment a w are compatible if and only if:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2200 j, k, m, n \u2208 N + , \u2203 s, t \u2208 N + [a m (j, k) = (m, n) \u21d2 a w (j) = m] \u2227 [a w (j) = m \u21d2 a m (j, s) = (m, t)] \u2227 [a w (j) = null \u21d2 a m (j, k) = null] (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Please note that, according to this definition of compatibility, 'a m (j, k) = null' does not necessarily imply 'a w (j) = null'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A word alignment induces a set of compatible morpheme alignments. However, a morpheme alignment induces a unique word alignment. Therefore, if a morpheme alignment a m and a word alignment a w are compatible, then the word alignment is a w is recoverable from the morpheme alignment a m .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
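
{

"text": "To make the compatibility conditions concrete, the following Python sketch (an illustration with assumed data structures and names, not a description of the actual implementation) recovers the word alignment induced by a morpheme alignment and checks the three conditions of Eqn. 1. A morpheme alignment is represented as a dictionary mapping (j, k) to (m, n), or to None for null.\n\ndef induced_word_alignment(a_m, num_words):\n    # Recover the unique word alignment a_w implied by the morpheme alignment a_m.\n    a_w = {j: None for j in range(1, num_words + 1)}\n    for (j, k), target in a_m.items():\n        if target is None:\n            continue\n        m = target[0]\n        if a_w[j] is not None and a_w[j] != m:\n            raise ValueError('morphemes of word %d align into two different words' % j)\n        a_w[j] = m\n    return a_w\n\ndef is_compatible(a_m, a_w):\n    # Check the three conditions of Eqn. 1.\n    for (j, k), target in a_m.items():\n        if target is not None and a_w.get(j) != target[0]:\n            return False  # an aligned morpheme must follow the word alignment\n    for j, m in a_w.items():\n        targets = [t for (jj, _), t in a_m.items() if jj == j]\n        if m is None and any(t is not None for t in targets):\n            return False  # a null-aligned word has only null-aligned morphemes\n        if m is not None and not any(t is not None and t[0] == m for t in targets):\n            return False  # an aligned word needs at least one aligned morpheme\n    return True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Morpheme Alignment",

"sec_num": "2.1"

},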
|
{ |
|
"text": "The two-level alignment model (TAM), like IBM Model 1, defines an alignment between words of a sentence pair. In addition, it defines a morpheme alignment between the morphemes of a sentence pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The problem domain of IBM Model 1 is defined over alignments between words, which is depicted as the gray box in Figure 1 . In Figure 2 , the smaller boxes embedded inside the main box depict the new problem domain of TAM. Given the word alignments in Figure 1 , we are presented with a new alignment problem defined over their morphemes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 121, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 135, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 260, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The new alignment problem is constrained by the given word alignment. We, like IBM Model 1, adopt a bag-of-morphemes approach to this new problem. We thus embed one IBM Model 1 into the scope of another IBM Model 1, and formulate a second-order interpretation of IBM Model 1. TAM, like IBM Model 1, assumes that words and morphemes are translated independently of their context. The units of translation are both words and morphemes. Both the word alignment a w and the morpheme alignment a m are hidden variables that need to be learned from the data using the EM algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In IBM Model 1, p(e|f ), the probability of translating the sentence f into e with any alignment is computed by summing over all possible word alignments: In TAM, the probability of translating the sentence f into e with any alignment is computed by summing over all possible word alignments and all possible morpheme alignments that are compatible with a given word alignment a w :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(e|f ) = a p(a, e|f )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(e|f ) = aw p(a w , e|f ) am p(a m , e|a w , f ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where a m stands for a morpheme alignment. Since the morpheme alignment a m is in the scope of a given word alignment a w , a m is constrained by a w .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In IBM Model 1, we compute the probability of translating the sentence f into e by summing over all possible word alignments between the words of f and e:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(e|f ) = R(e, f ) |e| j=1 |f | i=0 t(e j |f i )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where t(e j | f i ) is the word translation probability of e j given f i . R(e, f ) substitutes P (le|l f ) (l f +1) le for easy readability. 1 In TAM, the probability of translating the sentence f into e is computed as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 143, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Word R(e, f ) |e| j=1 |f | i=0 t(e j |f i ) R(e j , f i ) |e j | k=1 |f i | n=0 t(e k j |f n i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Morpheme where f n i is the n th morpheme of the word at position i. The right part of this equation, the contribution of morpheme translation probabilities, is 1 le = |e| is the number of words in sentence e and l f = |f |.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "in the scope of the left part. In the right part, we compute the probability of translating the word f i into the word e j by summing over all possible morpheme alignments between the morphemes of e j and f i . R(e j , f i ) is equivalent to R(e, f ) except for the fact that its domain is not the set of sentences but the set of words. The length of words e j and f i in R(e j , f i ) are the number of morphemes of e j and f i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The left part, the contribution of word translation probabilities alone, equals Eqn. 3. Therefore, canceling the contribution of morpheme translation probabilities reduces TAM to IBM Model 1. In our experiments, we call this reduced version of TAM 'word-only' (IBM). TAM with the contribution of both word and morpheme translation probabilities, as the equation above, is called 'word-andmorpheme'. Finally, we also cancel out the contribution of word translation probabilities, which is called 'morpheme-only'. In the 'morpheme-only' version of TAM, t(e j |f i ) equals 1. Bellow is the equation of p(e|f ) in the morpheme-only model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(e|f ) = R(e, f ) |e| j=1 |f | i=0 |e j | k=1 |f i | n=0 R(e j , f i )t(e k j |f n i ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Please note that, although this version of the twolevel alignment model does not use word translation probabilities, it is also a word-aware model, as morpheme alignments are restricted to correspond to a valid word alignment according to Eqn. 1. When presented with words that exhibit no morphology, the morpheme-only version of TAM is equivalent to IBM Model 1, as every single-morpheme word is itself a morpheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
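
{

"text": "The following Python sketch illustrates how these likelihoods can be computed for one sentence pair. It is a simplified illustration under stated assumptions (translation tables are plain dictionaries, the null word and the null morpheme are written as the token NULL, and the same Poisson-based length term of Section 2.2 is reused at both levels); it is not a description of the actual implementation. Setting the t_word factor to 1 gives the morpheme-only model of Eqn. 4, and dropping the inner morpheme factor recovers IBM Model 1 (Eqn. 3).\n\nimport math\n\nNULL = 'NULL'  # null word / null morpheme token (an assumption of this sketch)\n\ndef length_term(le, lf, rate=1.0):\n    # R(., .) = P(le|lf) / (lf + 1)**le with the Poisson length model of Sec. 2.2\n    p_len = math.exp(-rate * lf) * (rate * lf) ** le / math.factorial(le)\n    return p_len / float((lf + 1) ** le)\n\ndef tam_prob(e, f, t_word, t_morph, rate=1.0):\n    # e, f: sentences as lists of words; each word is a list of morpheme strings.\n    # Word-and-morpheme TAM:\n    # p(e|f) = R(e,f) prod_j sum_i t(e_j|f_i) R(e_j,f_i) prod_k sum_n t(e_j^k|f_i^n)\n    src = [[NULL]] + f  # prepend the null word\n    prob = length_term(len(e), len(f), rate)\n    for e_j in e:\n        word_sum = 0.0\n        for f_i in src:\n            is_null = (f_i == [NULL])\n            inner = 1.0 if is_null else length_term(len(e_j), len(f_i), rate)\n            candidates = f_i if is_null else [NULL] + f_i\n            for e_jk in e_j:\n                inner *= sum(t_morph.get((e_jk, f_in), 0.0) for f_in in candidates)\n            word_sum += t_word.get((tuple(e_j), tuple(f_i)), 0.0) * inner\n        prob *= word_sum\n    return prob",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Morpheme Alignment",

"sec_num": "2.1"

},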
|
{ |
|
"text": "Deficiency and Non-Deficiency of TAM We present two versions of TAM, the word-and-morpheme and the morpheme-only versions. The word-and-morpheme version of the model is deficient whereas the morpheme-only model is not. The word-and-morpheme version is deficient, because some probability is allocated to cases where the morphemes generated by the morpheme model do not match the words generated by the word model. Moreover, although most languages exhibit morphology to some extent, they can be input to the algorithm without morpheme boundaries. This also causes deficiency in the word-and-morpheme version, as single morpheme words are generated twice, as a word and as a morpheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Nevertheless, we observed that the deficient version of TAM can perform as good as the nondeficient version of TAM, and sometimes performs better. This is not surprising, as deficient word alignment models such as IBM Model 3 or discriminative word alignment models work well in practice. Goldwater and McClosky (2005) proposed a morpheme aware word alignment model for language pairs in which the source language words correspond to only one morpheme. Their word alignment model is:", |
|
"cite_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 318, |
|
"text": "Goldwater and McClosky (2005)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "P (e|f ) = K k=0 P (e k |f )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where e k is the k th morpheme of the word e. The morpheme-only version of our model is a generalization of this model. However, there are major differences in their and our implementation and experimentation. Their model assumes a fixed number of possible morphemes associated with any stem in the language, and if the morpheme e k is not present, it is assigned a null value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The null word on the source side is also a null morpheme, since every single morpheme word is itself a morpheme. In TAM, the null word is the null morpheme that all unaligned morphemes align to.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Alignment", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In TAM, we collect counts for both word translations and morpheme translations. Unlike IBM Model 1, R(e, f ) = P (le|l f ) (l f +1) le does not cancel out in the counts of TAM. To compute the conditional probability P (l e |l f ), we assume that the length of word e (the number of morphemes of word e) varies according to a Poisson distribution with a mean that is linear with length of the word f .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "P (l e |l f ) = F Poisson (l e , r \u2022 l f ) = exp(\u2212r \u2022 l f )(r \u2022 l f ) le l e ! F Poisson (l e , r \u2022 l f )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "expresses the probability that there are l e morphemes in e if the expected number of morphemes in e is r \u2022 l f , where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "r = E[le] E[l f ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "is the rate parameter. Since l f is undefined for null words, we omit R(e, f ) for null words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We introduce T (e|f ), the translation probability of e given f with all possible morpheme alignments, as it will occur frequently in the counts of TAM:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "T (e|f ) = t(e|f )R(e, f ) |e| k=1 |f | n=0 t(e k |f n )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The role of T (e|f ) in TAM is very similar to the role of t(e|f ) in IBM Model 1. In finding the Viterbi alignments, we do not take max over the values in the summation in T (e|f ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Counts", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Similar to IBM Model 1, we collect counts for word translations over all possible alignments, weighted by their probability. In Eqn. 5, the count function collects evidence from a sentence pair (e, f ) as follows: For all words e j of the sentence e and for all word alignments a w (j), we collect counts for a particular input word f and an output word e iff e j = e and f aw(j) = f .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Counts", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "c w (e|f ; e, f , a w ) = 1\u2264j\u2264|e| s.t. e=e j f =f aw (j) T (e|f ) |f | i=0 T (e|f i ) (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Counts", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "As for morpheme translations, we collect counts over all possible word and morpheme alignments, weighted by their probability. The morpheme count function below collects evidence from a word pair (e, f ) in a sentence pair (e, f ) as follows: For all words e j of the sentence e and for all word alignments a w (j), for all morphemes e k j of the word e j and for all morpheme alignments a m (j, k), we collect counts for a particular input morpheme g and an output morpheme h iff e j = e and f aw(j) = f and h = e k j and g = f am(j,k) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Counts", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "c m (h|g; e, f , a w , a m ) = 1\u2264j\u2264|e| s.t. e=e j f =f aw (j) 1\u2264k\u2264|e| s.t. h=e k j g=f am(j,k) T (e|f ) |f | i=0 T (e|f i ) t(h|g) |f | i=1 t(h|f i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Counts", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "The left part of the morpheme count function is the same as the word-counts in Eqn. 5. Since it does not contain h or g, it needs to be computed only once for each word. The right part of the equation is familiar from the IBM Model 1 counts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpheme Counts", |
|
"sec_num": "2.2.2" |
|
}, |
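
{

"text": "As an illustration of how these counts can be accumulated for a single sentence pair, consider the Python sketch below. It assumes dictionary-based translation tables, dict-like count accumulators, and the NULL and length_term conventions of the earlier sketch, and it folds the null morpheme into the candidate lists; it is a sketch of the E step only, not the actual implementation.\n\nimport math\n\nNULL = 'NULL'\n\ndef length_term(le, lf, rate=1.0):\n    # Poisson length model of Sec. 2.2: P(le|lf) / (lf + 1)**le\n    p_len = math.exp(-rate * lf) * (rate * lf) ** le / math.factorial(le)\n    return p_len / float((lf + 1) ** le)\n\ndef with_null(f_word):\n    # candidate source morphemes: the null morpheme plus the morphemes of f_word\n    return f_word if f_word == [NULL] else [NULL] + f_word\n\ndef big_T(e_word, f_word, t_word, t_morph, rate=1.0):\n    # T(e|f) = t(e|f) R(e,f) prod_k sum_n t(e^k | f^n)\n    val = t_word.get((tuple(e_word), tuple(f_word)), 0.0)\n    if f_word != [NULL]:\n        val *= length_term(len(e_word), len(f_word), rate)\n    for e_k in e_word:\n        val *= sum(t_morph.get((e_k, f_n), 0.0) for f_n in with_null(f_word))\n    return val\n\ndef collect_counts(e, f, t_word, t_morph, c_w, c_m, rate=1.0):\n    # c_w, c_m: collections.defaultdict(float) accumulating word / morpheme counts\n    src = [[NULL]] + f\n    for e_j in e:\n        T_vals = [big_T(e_j, f_i, t_word, t_morph, rate) for f_i in src]\n        denom = sum(T_vals) or 1.0\n        for f_i, T_val in zip(src, T_vals):\n            w_post = T_val / denom  # word-level posterior of Eqn. 5\n            c_w[(tuple(e_j), tuple(f_i))] += w_post\n            for h in e_j:  # morpheme-level posteriors within the word link\n                cands = with_null(f_i)\n                m_denom = sum(t_morph.get((h, g), 0.0) for g in cands) or 1.0\n                for g in cands:\n                    c_m[(h, g)] += w_post * t_morph.get((h, g), 0.0) / m_denom",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Morpheme Counts",

"sec_num": "2.2.2"

},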
|
{ |
|
"text": "We implemented TAM with the HMM extension (Vogel et al., 1996) at the word level. We redefine p(e|f ) as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 62, |
|
"text": "(Vogel et al., 1996)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "R(e, f ) aw |e| j=1 p(s(j ) |C (f aw (j \u22121 ) )) t(e j |f aw(j) ) R(e j , f aw(j) ) am |e j | k=1 t(e k j |f am(j,k) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where the distortion probability depends on the relative jump width s(j) = a w (j \u2212 1) \u2212 a w (j), as opposed to absolute positions. The distortion probability is conditioned on class of the previous aligned word C (f aw(j\u22121) ). We used the mkcls tool in GIZA (Och and Ney, 2003) to learn the word classes. We formulated the HMM extension of TAM only at the word level. Nevertheless, the morpheme-only version of TAM also has an HMM extension, as it is also a word-aware model. To obtain the HMM extension of the morpheme-only version, substitute t(e j |f aw(j) ) with 1 in the equation above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 278, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For the HMM to work correctly, we must handle jumping to and jumping from null positions. We learn the probabilities of jumping to a null position from the data. To compute the jump probability from a null position, we keep track of the nearest previous source word that does not align to null, and use the position of the previous non-null word to calculate the jump width. For this reason, we use a total of 2l f \u2212 1 words for the HMM model, the positions > l f stand for null positions between the words of f (Och and Ney, 2003). We do not allow null to null jumps. In sum, we enforce the following constraints:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "P (i + l f + 1|i ) = p(null|i ) P (i + l f + 1|i + l f + 1) = 0 P (i|i + l f + 1) = p(i|i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
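
{

"text": "A small sketch of how these constraints can be encoded follows (the jump distribution p_jump and the null probability p_null are abstract placeholders, word-class conditioning is omitted, and the null copy of source position i is stored at index i + l_f; an illustration only).\n\ndef transition_prob(i_next, i_prev, l_f, p_jump, p_null):\n    # States 1..l_f are source words; states above l_f are null positions paired\n    # with the nearest previous non-null word (Och and Ney, 2003).\n    from_null = i_prev > l_f\n    to_null = i_next > l_f\n    base = i_prev - l_f if from_null else i_prev  # last non-null source position\n    if to_null and from_null:\n        return 0.0  # null-to-null jumps are not allowed\n    if to_null:\n        return p_null  # jump into a null position\n    return p_jump(i_next - base)  # jump width relative to the last non-null word",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "HMM Extension",

"sec_num": "2.3"

},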
|
{ |
|
"text": "In the HMM extension of TAM, we perform forward-backward training using the word counts in Eqn. 5 as the emission probabilities. We calculate the posterior word translation probabilities for each e j and f i such that 1 \u2264 j \u2264 l e and 1 \u2264 i \u2264 2l f \u2212 1 as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u03b3 j (i) = \u03b1 j (i)\u03b2 j (i) 2l f \u22121 m=1 \u03b1 j (m)\u03b2 j (m)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where \u03b1 is the forward and \u03b2 is the backward probabilities of the HMM. The HMM word counts, in turn, are the posterior word translation probabilities obtained from the forward-backward training:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "c w (e|f ; e, f , a w ) = 1\u2264j\u2264|e| s.t. e=e j f =f aw (j) \u03b3 j (a w (j))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Likewise, we use the posterior probabilities in HMM morpheme counts:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "c m (h|g; e, f , a w , a m ) = 1\u2264j\u2264|e| s.t. e=e j f =f aw (j) 1\u2264k\u2264|e| s.t. h=e k j g=f am(j,k) \u03b3 j (a w (j)) t(h|g) |f | i=1 t(h|f i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The complexity of the HMM extension of TAM is O(n 3 m 2 ), where n is the number of words, and m is the number of morphemes per word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Extension", |
|
"sec_num": "2.3" |
|
}, |
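
{

"text": "To make the posterior step concrete, the sketch below computes gamma from precomputed forward and backward tables and accumulates the HMM word counts. It assumes that alpha and beta are lists of per-state scores for each target position and uses plain dictionaries for the counts; it is an illustration, not the actual implementation.\n\ndef posteriors(alpha, beta):\n    # gamma_j(i) = alpha_j(i) beta_j(i) / sum_m alpha_j(m) beta_j(m)\n    gamma = []\n    for a_j, b_j in zip(alpha, beta):\n        scores = [a * b for a, b in zip(a_j, b_j)]\n        z = sum(scores) or 1.0\n        gamma.append([s / z for s in scores])\n    return gamma\n\ndef hmm_word_counts(e, src_states, gamma, c_w):\n    # c_w(e|f) accumulates gamma_j(i) for every target word e_j and source state i\n    for j, e_j in enumerate(e):\n        for i, f_i in enumerate(src_states):\n            key = (tuple(e_j), tuple(f_i))\n            c_w[key] = c_w.get(key, 0.0) + gamma[j][i]\n    return c_w",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "HMM Extension",

"sec_num": "2.3"

},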
|
{ |
|
"text": "Moore (2004) showed that the EM algorithm is particularly susceptible to overfitting in the case of rare words when training IBM Model 1. In order to prevent overfitting, we use the Variational Bayes extension of the EM algorithm (Beal, 2003) . This amounts to a small change to the M step of the original EM algorithm. We introduce Dirichlet priors \u03b1 to perform an inexact normalization by applying the function f (v) = exp(\u03c8(v)) to the expected counts collected in the E step, where \u03c8 is the digamma function (Johnson, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 12, |
|
"text": "(2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 242, |
|
"text": "(Beal, 2003)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 526, |
|
"text": "(Johnson, 2007)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Bayes", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u03b8 x|y = f (E[c(x|y)] + \u03b1) f ( j E[c(x j |y)] + \u03b1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Bayes", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We set \u03b1 to 10 \u221220 , a very low value, to have the effect of anti-smoothing, as low values of \u03b1 cause the algorithm to favor words which co-occur frequently and to penalize words that co-occur rarely.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Bayes", |
|
"sec_num": "2.4" |
|
}, |
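
{

"text": "The modified M step can be sketched as follows (a minimal illustration assuming scipy is available for the digamma function; expected_counts maps each conditioning event y to a dictionary of expected counts E[c(x|y)]):\n\nfrom math import exp\nfrom scipy.special import digamma  # the psi function\n\ndef vb_normalize(expected_counts, alpha=1e-20):\n    # theta_{x|y} = f(E[c(x|y)] + alpha) / f(sum_j E[c(x_j|y)] + alpha),\n    # with f(v) = exp(psi(v))  (Johnson, 2007)\n    theta = {}\n    for y, row in expected_counts.items():\n        denom = exp(digamma(sum(row.values()) + alpha))\n        theta[y] = {x: exp(digamma(c + alpha)) / denom for x, c in row.items()}\n    return theta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Variational Bayes",

"sec_num": "2.4"

},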
|
{ |
|
"text": "We trained our model on a Turkish-English parallel corpus of approximately 50K sentences, which have a maximum of 80 morphemes. Our parallel data consists mainly of documents in international relations and legal documents from sources such as the Turkish Ministry of Foreign Affairs, EU, etc. We followed a heavily supervised approach in morphological analysis. The Turkish data was first morphologically parsed (Oflazer, 1994) , then disambiguated (Sak et al., 2007) to select the contextually salient interpretation of words. In addition, we removed morphological features that are not explicitly marked by an overt morpheme -thus each feature symbol beyond the root part-of-speech corresponds to a morpheme. Line (b) of Figure 3 shows an example of a segmented Turkish sentence. The root is followed by its part-of-speech tag separated by a '+'. The derivational and inflectional morphemes that follow the root are separated by '-'s. In all experiments, we used the same segmented version of the Turkish data, because Turkish is an agglutinative language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 427, |
|
"text": "(Oflazer, 1994)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 467, |
|
"text": "(Sak et al., 2007)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 723, |
|
"end": 731, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
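
{

"text": "For illustration, a token in the segmented Turkish format of line (b) in Figure 3 can be split into the units used by our models as follows (the function name is ours and the rule simply mirrors the description above):\n\ndef morphemes(token):\n    # A segmented token such as 'makam+Noun-A3pl-P3sg' consists of the root with\n    # its part-of-speech tag ('makam+Noun'), followed by '-'-separated\n    # derivational and inflectional morpheme symbols ('A3pl', 'P3sg').\n    return token.split('-')\n\n# e.g. morphemes('makam+Noun-A3pl-P3sg') == ['makam+Noun', 'A3pl', 'P3sg']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": "3.1"

},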
|
{ |
|
"text": "For English, we used the CELEX database (Baayen et al., 1995) to segment English words into morphemes. We created two versions of the data: a segmented version that involves both derivational and inflectional morphology, and an unsegmented POS tagged version. The CELEX database provides tags for English derivational morphemes, which indicate their function: the part-of-speech category the morpheme attaches to and the part-of-speech category it returns. For example, in 'sparse+ity' = 'sparsity', the morpheme -ity attaches to an adjective to the right and returns a noun. This behavior is represented as 'N|A.' in CELEX, where '.' indicates the attachment position. We used these tags in addition to the surface forms of the English morphemes, in order to disambiguate multiple functions of a single surface morpheme.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 61, |
|
"text": "(Baayen et al., 1995)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The English sentence in line (d) of Figure 3 exhibits both derivational and inflectional morphology. For example, 'author+ity+s'='authorities' has both an inflectional suffix -s and a derivational suffix -ity, whereas 'person+s' has only an inflectional suffix -s.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For both English and Turkish data, the dashes in Figure 3 stand for morpheme boundaries, therefore the strings between the dashes are treated as indi- Table 1 shows the number of words, the number of morphemes and the respective vocabulary sizes. The average number of morphemes in segmented Turkish words is 2.69, and the average length of segmented English words is 1.57.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 57, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We initialized our baseline word-only model with 5 iterations of IBM Model 1, and further trained the HMM extension (Vogel et al., 1996) for 5 iterations. We call this model 'baseline HMM' in the discussions. Similarly, we initialized the two versions of TAM with 5 iterations of the model explained in Section 2.2, and then trained the HMM extension of it as explained in Section 2.3 for 5 iterations. To obtain BLEU scores for TAM models and our implementation of the word-only model, i.e. baseline-HMM, we bypassed GIZA++ in the Moses toolkit (Och and Ney, 2003) . We also ran GIZA++ (IBM Model 1-4) on the data. We translated 1000 sentence test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 136, |
|
"text": "(Vogel et al., 1996)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 565, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We evaluated the performance of our model in two different ways. First, we evaluated against gold word alignments for 75 Turkish-English sentences. Second, we used the word Viterbi alignments of our algorithm to obtain BLEU scores. Table 2 shows the AER (Och and Ney, 2003) of the word alignments of the Turkish-English pair and the translation performance of the word alignments learned by our models. We report the grow-diagfinal (Koehn et al., 2003) of the Viterbi alignments. In Table 2 , results obtained with different versions of the English data are represented as follows: 'Der' stands for derivational morphology, 'Inf' for inflectional morphology, and 'POS' for part-of-speech tags. 'Der+Inf' corresponds to the example sentence in line (d) in Figure 3 , and 'POS' to line (e). 'DIR' stands for models with Dirichlet priors, and 'NO DIR' stands for models without Dirichlet priors. All reported results are of the HMM extension of respective models. Table 2 shows that using Dirichlet priors hurts the AER performance of the word-and-morpheme model in all experiment settings, and benefits the morpheme-only model in the POS tagged experiment settings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 273, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 452, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 239, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 490, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 763, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 968, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to reduce the effect of nondeterminism, we run Moses three times per experiment setting, and report the highest BLEU scores obtained. Since the BLEU scores we obtained are close, we did a significance test on the scores (Koehn, 2004) . Table 2 visualizes the partition of the BLEU scores into statistical significance groups. If two scores within the same column have the same background color, or the border between their cells is removed, then the difference between their scores is not statistically significant. For example, the best BLEU scores, which are in bold, have white background. All scores in a given experiment setting without white background are significantly worse than the best score in that experiment setting, unless there is no border separating them from the best score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 242, |
|
"text": "(Koehn, 2004)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 252, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In all experiment settings, the TAM Models perform better than the baseline-HMM. Our experiments showed that the baseline-HMM benefits from Dirichlet priors to a larger extent than the TAM models. Dirichlet priors help reduce the overfitting in the case of rare words. The size of the word vocabulary is larger than the size of the morpheme vocabulary. Therefore the number of rare words is larger for words than it is for morphemes. Consequently, baseline-HMM, using only the word vocab- ulary, benefits from the use of Dirichlet priors more than the TAM models. In four out of eight experiment settings, the morpheme-only model performs better than the word-and-morpheme version of TAM. However, please note that our extensive experimentation with TAM models revealed that the superiority of the morpheme-only model over the word-andmorpheme model is highly dependent on segmentation accuracy, degree of segmentation, and morphological richness of languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Finally, we treated morphemes as words and trained IBM Model 4 on the morpheme segmented versions of the data. To obtain BLEU scores, we had to unsegment the translation output: we concatenated the prefixes to the morpheme to the right, and suffixes to the morpheme to the left. Since this process creates malformed words, the BLEU scores obtained are much lower than the scores obtained by IBM Model 4, the baseline and the TAM Models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
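
{

"text": "A sketch of this unsegmentation step follows; the prefix and suffix marking convention used here (a trailing '+' marks a prefix, a leading '-' marks a suffix) is an assumption for illustration, not the actual markup of the system output.\n\ndef unsegment(tokens):\n    # Concatenate prefixes to the token on their right and suffixes to the token\n    # on their left, as described above.\n    words = []\n    pending_prefix = ''\n    for tok in tokens:\n        if tok.endswith('+'):\n            pending_prefix += tok[:-1]\n        elif tok.startswith('-') and words:\n            words[-1] += tok[1:]\n        else:\n            words.append(pending_prefix + tok)\n            pending_prefix = ''\n    if pending_prefix:\n        words.append(pending_prefix)\n    return words\n\n# e.g. unsegment(['person', '-s', 'search', 'unit']) == ['persons', 'search', 'unit']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results and Discussion",

"sec_num": "4"

},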
|
{ |
|
"text": "We presented two versions of a two-level alignment model for morphologically rich languages. We ob-served that information provided by word translations and morpheme translations interact in a way that enables the model to be receptive to the partial information in rarely occurring words through their frequently occurring morphemes. We obtained significant improvement of BLEU scores over IBM Model 4. In conclusion, morphologically aware word alignment models prove to be superior to their word-only counterparts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgments Funded by NSF award IIS-0910611. Kemal Oflazer acknowledges the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program. The statements made herein are solely the responsibility of this author(s), and not necessarily that of Qatar Foundation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Curin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Jahr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz-Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Purdy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Final Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. Technical report, Final Report, JHU Summer Work- shop.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The CELEX Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Baayen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Piepenbrock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Gulikers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.H. Baayen, R. Piepenbrock, and L. Gulikers. 1995. The CELEX Lexical Database (Release 2) [CD-ROM].", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Linguistic Data Consortium, University of Pennsylvania", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linguistic Data Consortium, University of Pennsylva- nia [Distributor], Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Variational Algorithms for Approximate Bayesian Inference", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Beal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew J. Beal. 2003. Variational Algorithms for Ap- proximate Bayesian Inference. Ph.D. thesis, Univer- sity College London.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Unsupervised tokenization for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tagyoung", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "718--726", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tagyoung Chung and Daniel Gildea. 2009. Unsu- pervised tokenization for machine translation. In EMNLP, pages 718-726.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Czech-English dependency-based machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Martin\u010dmejrek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Cu\u0159\u00edn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Havelka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin\u010cmejrek, Jan Cu\u0159\u00edn, and Ji\u0159\u00ed Havelka. 2003. Czech-English dependency-based machine transla- tion. In EACL, pages 83-90, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Improving statistical MT through morphological analysis", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "HLT-EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater and David McClosky. 2005. Improv- ing statistical MT through morphological analysis. In HLT-EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Why doesn't EM find good HMM POS-taggers?", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "296--305", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Johnson. 2007. Why doesn't EM find good HMM POS-taggers? In EMNLP-CoNLL, pages 296-305, Prague, Czech Republic, June. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLT- NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Statistical significance tests for machine translation evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP, pages 388-395.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Morphological analysis for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Young-Suk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young-suk Lee. 2004. Morphological analysis for statis- tical machine translation. In HLT-NAACL, pages 57- 60.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Improving IBM word alignment model 1", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "518--525", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C. Moore. 2004. Improving IBM word alignment model 1. In ACL, pages 518-525, Barcelona, Spain, July.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Unsupervised bilingual morpheme segmentation and alignment with context-rich Hidden Semi-Markov Models", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "895--904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Naradowsky and Kristina Toutanova. 2011. Unsu- pervised bilingual morpheme segmentation and align- ment with context-rich Hidden Semi-Markov Models. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Lan- guage Technologies, pages 895-904, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Improving SMT quality with morpho-syntactic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Sonja", |
|
"middle": [], |
|
"last": "Niessen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1081--1085", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sonja Niessen and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In Computa- tional Linguistics, pages 1081-1085, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Two-level description of Turkish morphology", |
|
"authors": [ |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Literary and Linguistic Computing", |
|
"volume": "9", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kemal Oflazer. 1994. Two-level description of Turkish morphology. Literary and Linguistic Computing, 9(2).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Morphological disambiguation of Turkish text with perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Ha\u015fim", |
|
"middle": [], |
|
"last": "Sak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tunga", |
|
"middle": [], |
|
"last": "G\u00fcng\u00f6r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Murat", |
|
"middle": [], |
|
"last": "Sara\u00e7lar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "CICLing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ha\u015fim Sak, Tunga G\u00fcng\u00f6r, and Murat Sara\u00e7lar. 2007. Morphological disambiguation of Turkish text with perceptron algorithm. In CICLing, pages 107-118, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "HMM-based word alignment in statistical translation", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "836--841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. In COLING, pages 836-841.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Syntax-tomorphology mapping in factored phrase-based statistical machine translation from English to Turkish", |
|
"authors": [ |
|
{ |
|
"first": "Reyyan", |
|
"middle": [], |
|
"last": "Yeniterzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "454--464", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reyyan Yeniterzi and Kemal Oflazer. 2010. Syntax-to- morphology mapping in factored phrase-based statis- tical machine translation from English to Turkish. In ACL, pages 454-464, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Word alignment", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Morpheme alignment", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "(a) Kas\u0131m 1996'da, T\u00fcrk makamlar\u0131,\u0130\u00e7i\u015fleri Bakanl\u0131g\u0131 b\u00fcnyesinde bir kay\u0131p ki\u015fileri arama birimi olu\u015fturdu. (b) Kas\u0131m+Noun 1996+Num-Loc ,+Punc T\u00fcrk+Noun makam+Noun-A3pl-P3sg ,+Punc\u0130\u00e7i\u015fi+Noun-A3pl-P3sg Bakanl\u0131k+Noun-P3sg b\u00fcnye+Noun-P3sg-Loc bir+Det kay\u0131p+Adj ki\u015fi+Noun-A3pl-Acc ara+Verb-Inf2 birim+Noun-P3sg olu\u015f+Verb-Caus-Past .+Punc (c) In November 1996 the Turkish authorities set up a missing persons search unit within the Ministry of the Interior. (d) in+IN November+NNP 1996+CD the+DT Turkish+JJ author+NN-ity+N|N.-NNS set+VB-VBD up+RP a+DT miss+VB-VBG+JJ person+NN-NNS search+NN unit+NN within+IN the+DT minister+NN-y+N|N. of+IN the+DT interior+NN .+. (e) In+IN November+NNP 1996+CD the+DT Turkish+JJ authorities+NNS set+VBD up+RP a+DT missing+JJ persons+NNS search+NN unit+NN within+IN the+DT Ministry+NNP of+IN the+DT Interior+NNP .+.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"text": "Turkish-English data examples", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "Data statistics visible units.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |