|
{ |
|
"paper_id": "D09-1023", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:38:55.655834Z" |
|
}, |
|
"title": "Feature-Rich Translation by Quasi-Synchronous Lattice Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a machine translation framework that can incorporate arbitrary features of both input and output sentences. The core of the approach is a novel decoder based on lattice parsing with quasisynchronous grammar (Smith and Eisner, 2006), a syntactic formalism that does not require source and target trees to be isomorphic. Using generic approximate dynamic programming techniques, this decoder can handle \"non-local\" features. Similar approximate inference techniques support efficient parameter estimation with hidden variables. We use the decoder to conduct controlled experiments on a German-to-English translation task, to compare lexical phrase, syntax, and combined models, and to measure effects of various restrictions on nonisomorphism.", |
|
"pdf_parse": { |
|
"paper_id": "D09-1023", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a machine translation framework that can incorporate arbitrary features of both input and output sentences. The core of the approach is a novel decoder based on lattice parsing with quasisynchronous grammar (Smith and Eisner, 2006), a syntactic formalism that does not require source and target trees to be isomorphic. Using generic approximate dynamic programming techniques, this decoder can handle \"non-local\" features. Similar approximate inference techniques support efficient parameter estimation with hidden variables. We use the decoder to conduct controlled experiments on a German-to-English translation task, to compare lexical phrase, syntax, and combined models, and to measure effects of various restrictions on nonisomorphism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We have seen rapid recent progress in machine translation through the use of rich features and the development of improved decoding algorithms, often based on grammatical formalisms. 1 If we view MT as a machine learning problem, features and formalisms imply structural independence assumptions, which are in turn exploited by efficient inference algorithms, including decoders (Koehn et al., 2003; Yamada and Knight, 2001) . Hence a tension is visible in the many recent research efforts aiming to decode with \"non-local\" features (Chiang, 2007; Huang and Chiang, 2007) . Lopez (2009) recently argued for a separation between features/formalisms (and the indepen-dence assumptions they imply) from inference algorithms in MT; this separation is widely appreciated in machine learning. Here we take first steps toward such a \"universal\" decoder, making the following contributions:", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 399, |
|
"text": "(Koehn et al., 2003;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 424, |
|
"text": "Yamada and Knight, 2001)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 547, |
|
"text": "(Chiang, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 571, |
|
"text": "Huang and Chiang, 2007)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 586, |
|
"text": "Lopez (2009)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Arbitrary feature model ( \u00a72): We define a single, direct log-linear translation model (Papineni et al., 1997; Och and Ney, 2002) that encodes most popular MT features and can be used to encode any features on source and target sentences, dependency trees, and alignments. The trees are optional and can be easily removed, allowing simulation of \"string-to-tree,\" \"tree-to-string,\" \"treeto-tree,\" and \"phrase-based\" models, among many others. We follow the widespread use of log-linear modeling for direct translation modeling; the novelty is in the use of richer feature sets than have been previously used in a single model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 110, |
|
"text": "(Papineni et al., 1997;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 129, |
|
"text": "Och and Ney, 2002)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Decoding as QG parsing ( \u00a73-4): We present a novel decoder based on lattice parsing with quasisynchronous grammar (QG; Smith and Eisner, 2006) . 2 Further, we exploit generic approximate inference techniques to incorporate arbitrary \"nonlocal\" features in the dynamic programming algorithm (Chiang, 2007; Gimpel and Smith, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 142, |
|
"text": "Smith and Eisner, 2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 146, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 304, |
|
"text": "(Chiang, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 328, |
|
"text": "Gimpel and Smith, 2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Parameter estimation ( \u00a75): We exploit similar approximate inference methods in regularized pseudolikelihood estimation (Besag, 1975) with hidden variables to discriminatively and efficiently train our model. Because we start with inference (the key subroutine in training), many other learning algorithms are possible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 133, |
|
"text": "(Besag, 1975)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Experimental platform ( \u00a76): The flexibility of our model/decoder permits carefully controlled experiments. We compare lexical phrase and dependency syntax features, as well as a novel combination of the two. We quantify the effects of our approximate inference. We explore the effects of various ways of restricting syntactic non-isomorphism between source and target trees through the QG. We do not report state-of-the-art performance, but these experiments reveal interesting trends that will inform continued research. (Table 1 explains notation.) Given a sentence s and its parse tree \u03c4_s, we formulate the translation problem as finding the target sentence t* (along with its parse tree \u03c4_t* and alignment a* to the source tree) such that 3",

"cite_spans": [],

"ref_spans": [

{

"start": 523,

"end": 540,

"text": "(Table 1 explains",

"ref_id": "TABREF0"

}

],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},

{

"text": "Table 1 (notation):\n\u03a3, T: source and target language vocabularies, respectively.\nTrans: \u03a3 \u222a {NULL} \u2192 2^T: function mapping each source word to the target words to which it may translate.\ns = s_0, . . . , s_n \u2208 \u03a3^n: source language sentence (s_0 is the NULL word).\nt = t_1, . . . , t_m \u2208 T^m: target language sentence, translation of s.\n\u03c4_s: {1, . . . , n} \u2192 {0, . . . , n}: dependency tree of s, where \u03c4_s(i) is the index of the parent of s_i (0 is the root, $).\n\u03c4_t: {1, . . . , m} \u2192 {0, . . . , m}: dependency tree of t, where \u03c4_t(i) is the index of the parent of t_i (0 is the root, $).\na: {1, . . . , m} \u2192 2^{1,...,n}: alignments from words in t to words in s; \u2205 denotes alignment to NULL.\n\u03b8: parameters of the model.\ng_trans(s, a, t): lexical translation features ( \u00a72.1): f_lex(s, t), word-to-word translation features for translating s as t; f_phr(s_i^j, t_k), phrase-to-phrase translation features for translating s_i^j as t_k.\ng_lm(t): language model features ( \u00a72.2): f_N(t_{j\u2212N+1}^j), N-gram probabilities.\ng_syn(t, \u03c4_t): target syntactic features ( \u00a72.3): f_att(t, j, t\u2032, k), syntactic features for attaching target word t\u2032 at position k to target word t at position j; f_val(t, j, I), syntactic valence features with word t at position j having children I \u2286 {1, . . . , m}.\ng_reor(s, \u03c4_s, a, t, \u03c4_t): reordering features ( \u00a72.4): f_dist(i, j), distortion features for a source word at position i aligned to a target word at position j.\ng_tree2(\u03c4_s, a, \u03c4_t): tree-to-tree syntactic features ( \u00a73): f_qg(i, i\u2032, j, k), configuration features for source pair s_i/s_{i\u2032} being aligned to target pair t_j/t_k.\ng_cov(a): coverage features ( \u00a74.2): f_scov(a), f_zth(a), f_sunc(a), counters for \"covering\" each source word each time, the zth time, and leaving it \"uncovered\".",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},
|
{ |
|
"text": "t * , \u03c4 * t , a * = argmax t,\u03c4 t ,a p(t, \u03c4 t , a | s, \u03c4 s ) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to include overlapping features and permit hidden variables during training, we use a single globally-normalized conditional log-linear model. That is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(t, \u03c4 t , a | s, \u03c4 s ) = exp{\u03b8 g(s, \u03c4 s , a, t, \u03c4 t )} a ,t ,\u03c4 t exp{\u03b8 g(s, \u03c4 s , a , t , \u03c4 t )}", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where the g are arbitrary feature functions and the \u03b8 are feature weights. If one or both parse trees or the word alignments are unavailable, they can be ignored or marginalized out as hidden variables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
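{

"text": "As a concrete illustration of Eq. 2 (a minimal sketch under assumed interfaces, not the implementation used in this paper), the unnormalized score is exp of a dot product between \u03b8 and the pooled feature vector; here each feature group is a callable returning a name-to-value map.\n\nfrom math import exp\n\ndef unnormalized_score(theta, feature_groups, s, tau_s, a, t, tau_t):\n    # theta: dict mapping feature name -> weight\n    # feature_groups: callables g(s, tau_s, a, t, tau_t) -> dict of values\n    dot = 0.0\n    for g in feature_groups:  # e.g., g_trans, g_lm, g_syn, g_reor\n        for name, value in g(s, tau_s, a, t, tau_t).items():\n            dot += theta.get(name, 0.0) * value\n    return exp(dot)  # Eq. 2 normalizes this by the sum over a', t', tau_t'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "2"

},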
|
{ |
|
"text": "In a log-linear model over structured objects, the choice of feature functions g has a huge effect 3 We assume in this work that s is parsed. In principle, we might include source-side parsing as part of decoding. on the feasibility of inference, including decoding. Typically these feature functions are chosen to factor into local parts of the overall structure. We next define some key features used in current MT systems, explaining how they factor. We will use subscripts on g to denote different groups of features, which may depend on subsets of the structures t, \u03c4 t , a, s, and \u03c4 s . When these features factor into parts, we will use f to denote the factored vectors, so that if x is an object that breaks into parts", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 100, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "{x i } i , then g(x) = i f (x i ). 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Classical lexical translation features depend on s and t and the alignment a between them. The simplest are word-to-word features, estimated as the conditional probabilities p(t | s) and p(s | t) for s \u2208 \u03a3 and t \u2208 T. Phrase-to-phrase features generalize these, estimated as p(t | s ) and p(s | t ) where s (respectively, t ) is a substring of s (t).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Translations", |
|
"sec_num": "2.1" |
|
}, |
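{

"text": "The word-to-word factoring can be made concrete with a short sketch (names such as p_t_given_s and the dict-based alignment a are assumptions of this illustration, not the paper's code); per Eq. 3, each alignment link i \u2208 a(j) contributes the estimates p(t | s) and p(s | t) as feature values.\n\ndef g_trans_lex(s, a, t, p_t_given_s, p_s_given_t):\n    # Eq. 3, word-to-word part: sum f_lex(s_i, t_j) over j and i in a(j).\n    feats = {'lex:p(t|s)': 0.0, 'lex:p(s|t)': 0.0}\n    for j in range(1, len(t) + 1):        # target positions 1..m\n        for i in a.get(j, ()):            # a(j): source indices aligned to t_j\n            feats['lex:p(t|s)'] += p_t_given_s.get((s[i], t[j - 1]), 0.0)\n            feats['lex:p(s|t)'] += p_s_given_t.get((s[i], t[j - 1]), 0.0)\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexical Translations",

"sec_num": "2.1"

},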
|
{ |
|
"text": "A major difference between the phrase features used in this work and those used elsewhere is that we do not assume that phrases segment into disjoint parts of the source and target sentences 4 There are two conventional definitions of feature functions. One is to let the range of these functions be conditional probability estimates (Och and Ney, 2002) . These estimates are usually heuristic and inconsistent (Koehn et al., 2003) . An alternative is to instantiate features for different structural patterns (Liang et al., 2006; . This offers more expressive power but may require much more training data to avoid overfitting. For this reason, and to keep training fast, we opt for the former convention, though our decoder can handle both, and the factorings we describe are agnostic about this choice. (Koehn et al., 2003) ; they can overlap. 5 Additionally, since phrase features can be any function of words and alignments, we permit features that consider phrase pairs in which a target word outside the target phrase aligns to a source word inside the source phrase, as well as phrase pairs with gaps (Chiang, 2005; Ittycheriah and Roukos, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 192, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 353, |
|
"text": "(Och and Ney, 2002)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 431, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 530, |
|
"text": "(Liang et al., 2006;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 826, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1109, |
|
"end": 1123, |
|
"text": "(Chiang, 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1124, |
|
"end": 1153, |
|
"text": "Ittycheriah and Roukos, 2007)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Translations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Lexical translation features factor as in Eq. 3 (Tab. 2). We score all phrase pairs in a sentence pair that pair a target phrase with the smallest source phrase that contains all of the alignments in the target phrase; if k:i\u2264k\u2264j a(k) = \u2205, no phrase feature fires for t j i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Translations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "N -gram language models have become standard in machine translation systems. For bigrams and trigrams (used in this paper), the factoring is in Eq. 4 (Tab. 2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N -gram Language Model", |
|
"sec_num": "2.2" |
|
}, |
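{

"text": "A sketch of the factoring in Eq. 4 (illustrative only; lm_prob is an assumed lookup returning a smoothed N-gram probability): each target position j, including the end-of-sentence position m+1, contributes f_N(t_{j\u2212N+1}^j) for N \u2208 {2, 3}.\n\ndef g_lm(t, lm_prob, orders=(2, 3)):\n    # Pad with boundary symbols so position m+1 scores the end of sentence.\n    padded = ['<s>', '<s>'] + list(t) + ['</s>']\n    feats = {}\n    for n in orders:\n        name = 'lm:%dgram' % n\n        feats[name] = 0.0\n        for j in range(2, len(padded)):       # target positions 1..m+1\n            ngram = tuple(padded[j - n + 1: j + 1])\n            feats[name] += lm_prob(ngram)     # probability of t_j given history\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "N -gram Language Model",

"sec_num": "2.2"

},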
|
{ |
|
"text": "There have been many features proposed that consider source-and target-language syntax during translation. Syntax-based MT systems often use features on grammar rules, frequently maximum likelihood estimates of conditional probabilities in a probabilistic grammar, but other syntactic features are possible. For example, Quirk et al. (2005) use features involving phrases and sourceside dependency trees and Mi et al. (2008) use features from a forest of parses of the source sentence. There is also substantial work in the use of target-side syntax (Galley et al., 2006; Shen et al., 2008) . In addition, researchers have recently added syntactic features to phrase-based and hierarchical phrase-based models (Gimpel and Smith, 2008; Haque et al., 2009; Chiang et al., 2008) . In this work, we focus on syntactic features of target-side dependency trees, \u03c4 t , along with the words t. These include attachment features that relate a word to its syntactic parent, and valence features. They factor as in Eq. 5 (Tab. 2). Features that consider only target-side syntax and words without considering s can be seen as \"syntactic language model\" features (Shen et al., 2008) . 5 Segmentation might be modeled as a hidden variable in future work. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 340, |
|
"text": "Quirk et al. (2005)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 424, |
|
"text": "Mi et al. (2008)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 571, |
|
"text": "(Galley et al., 2006;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 590, |
|
"text": "Shen et al., 2008)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 734, |
|
"text": "(Gimpel and Smith, 2008;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 754, |
|
"text": "Haque et al., 2009;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 775, |
|
"text": "Chiang et al., 2008)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1150, |
|
"end": 1169, |
|
"text": "(Shen et al., 2008)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1172, |
|
"end": 1173, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Target Syntax", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "g trans (s, a, t) = P m j=1 P i\u2208a(j) f lex (si, tj) (3) + P i,j:1\u2264i<j\u2264m f phr (s last(i,j) first(i,j) , t j i ) g lm (t) = P N \u2208{2,3} P m+1 j=1 f N (t j j\u2212N +1 ) (4) g syn (t, \u03c4t) = P m j=1 f att (tj, j, t \u03c4 t (j) , \u03c4t(j)) +f val (tj, j, \u03c4 \u22121 t (j)) (5) g reor (s, \u03c4s, a, t, \u03c4t) = P m j=1 P i\u2208a(j) f dist (i, j) (6) g tree 2 (\u03c4s, a, \u03c4t) = m X j=1 f qg (a(j), a(\u03c4t(j)), j, \u03c4t(j)) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Target Syntax", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "f . x j i denotes xi, . . . xj in sequence x = x1, . . . . first(i, j) = min k:i\u2264k\u2264j (min(a(k))) and last(i, j) = max k:i\u2264k\u2264j (max(a(k))).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Target Syntax", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Reordering features take many forms in MT. In phrase-based systems, reordering is accomplished both within phrase pairs (local reordering) as well as through distance-based distortion models (Koehn et al., 2003) and lexicalized reordering models (Koehn et al., 2007) . In syntax-based systems, reordering is typically parameterized by grammar rules. For generality we permit these features to \"see\" all structures and denote them g reor (s, \u03c4 s , a, t, \u03c4 t ). Eq. 6 (Tab. 2) shows a factoring of reordering features based on absolute positions of aligned words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 211, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 266, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reordering", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We turn next to the \"backbone\" model for our decoder; the formalism and the properties of its decoding algorithm will inspire two additional sets of features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reordering", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "A quasi-synchronous dependency grammar (QDG; Smith and Eisner, 2006 ) specifies a conditional model p(t, \u03c4 t , a | s, \u03c4 s ). Given a source sentence s and its parse \u03c4 s , a QDG induces a probabilistic monolingual dependency grammar over sentences \"inspired\" by the source sentence and tree. We denote this grammar by G s,\u03c4s ; its (weighted) language is the set of translations of s. Each word generated by G s,\u03c4s is annotated with a \"sense,\" which consists of zero or more words from s. The senses imply an alignment (a) between words in t and words in s, or equivalently, between nodes in \u03c4 t and nodes in \u03c4 s . In principle, any portion of \u03c4 t may align to any portion of \u03c4 s , but in practice we often make restrictions on the alignments to simplify computation. Smith and Eisner, for example, restricted |a(j)| for all words t j to be at most one, so that each target word aligned to at most one source word, which we also do here. 6 Which translations are possible depends heavily on the configurations that the QDG permits. Formally, for a parent-child pair t \u03c4 t (j) , t j in \u03c4 t , we consider the relationship between a(\u03c4 t (j)) and a(j), the source-side words to which t \u03c4 t (j) and t j align. If, for example, we require that, for all j, a(\u03c4 t (j)) = \u03c4 s (a(j)) or a(j) = 0, and that the root of \u03c4 t must align to the root of \u03c4 s or to NULL, then strict isomorphism must hold between \u03c4 s and \u03c4 t , and we have implemented a synchronous CF dependency grammar (Alshawi et al., 2000; Ding and Palmer, 2005) . Smith and Eisner (2006) grouped all possible configurations into eight classes and explored the effects of permitting different sets of classes in word alignment. (\"a(\u03c4 t (j)) = \u03c4 s (a(j))\" corresponds to their \"parent-child\" configuration; see Fig. 3 in Smith and Eisner (2006) for illustrations of the rest.) More generally, we can define features on tree pairs that factor into these local configurations, as shown in Eq. 7 (Tab. 2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 67, |
|
"text": "Smith and Eisner, 2006", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 937, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1468, |
|
"end": 1490, |
|
"text": "(Alshawi et al., 2000;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1491, |
|
"end": 1513, |
|
"text": "Ding and Palmer, 2005)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1516, |
|
"end": 1539, |
|
"text": "Smith and Eisner (2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1771, |
|
"end": 1794, |
|
"text": "Smith and Eisner (2006)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1761, |
|
"end": 1767, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammars", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Note that the QDG instantiates the model in Eq. 2. Of the features discussed in \u00a72, f lex , f att , f val , and f dist can be easily incorporated into the QDG as described while respecting the independence assumptions implied by the configuration features. The others (f phr , f 2 , and f 3 ) are nonlocal, or involve parts of the structure that, from the QDG's perspective, are conditionally independent given intervening material. Note that \"nonlocality\" is relative to a choice of formalism; in \u00a72 we did not commit to any formalism, so it is only now that we can describe phrase and N -gram features as non-local. Non-local features will present a challenge for decoding and training ( \u00a74.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammars", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given a sentence s and its parse \u03c4 s , at decoding time we seek the target sentence t * , the target tree \u03c4 * t , and the alignments a * that are most probable, as defined in Eq. 1. 7 (In \u00a75 we will consider kbest and all-translations variations on this prob-lem.) As usual, the normalization constant is not required for decoding; it suffices to solve:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "t * , \u03c4 * t , a * = argmax t,\u03c4 t ,a \u03b8 g(s, \u03c4 s , a, t, \u03c4 t ) (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For a QDG model, the decoding problem has not been addressed before. It equates to finding the most probable derivation under the s/\u03c4 s -specific grammar G s,\u03c4s . We solve this by lattice parsing, assuming that an upper bound on m (the length of t) is known. The advantage offered by this approach (like most other grammar-based translation approaches) is that decoding becomes dynamic programming (DP), a technique that is both widely understood in NLP and for which practical, efficient, generic techniques exist. A major advantage of DP is that, with small modifications, summing over structures is also possible with \"inside\" DP algorithms. We will exploit this in training ( \u00a75). Efficient summing opens up many possibilities for training \u03b8, such as likelihood and pseudolikelihood, and provides principled ways to handle hidden variables during learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We decode by performing lattice parsing on a lattice encoding the set of possible translations. The lattice is a weighted \"sausage\" lattice that permits sentences up to some maximum length ; is derived from the source sentence length. Let the states be numbered 0 to ; states from \u03c1 to are final states (for some \u03c1 \u2208 (0, 1)). For every position between consecutive states j \u2212 1 and j (0 < j \u2264 ), and for every word s i in s, and for every word t \u2208 Trans(s i ), we instantiate an arc annotated with t and i. The weight of such an arc is exp{\u03b8 f }, where f is the sum of feature functions that fire when s i translates as t in target position j (e.g., f lex (s i , t) and f dist (i, j)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation as Monolingual Parsing", |
|
"sec_num": "4.1" |
|
}, |
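{

"text": "The following sketch shows one way to build this weighted sausage lattice (the trans lookup, the theta_dot_f scorer, and the arc representation are assumptions of this illustration); between states j\u22121 and j there is one arc per pair of source word s_i and candidate translation t. Decoding then amounts to running the dependency-parsing DP over this lattice.\n\nfrom math import exp\n\ndef build_sausage_lattice(s, trans, theta_dot_f, max_len):\n    # lattice[j - 1] holds the arcs from state j-1 to state j; each arc is\n    # (target_word, source_index, weight), with weight exp{theta . f}.\n    lattice = []\n    for j in range(1, max_len + 1):\n        arcs = []\n        for i, s_i in enumerate(s):       # s[0] is the NULL word\n            for t_word in trans(s_i):\n                arcs.append((t_word, i, exp(theta_dot_f(s_i, i, t_word, j))))\n        lattice.append(arcs)\n    return lattice",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Translation as Monolingual Parsing",

"sec_num": "4.1"

},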
|
{ |
|
"text": "Given the lattice and G s,\u03c4s , lattice parsing is a straightforward generalization of standard context-free dependency parsing DP algorithms (Eisner, 1997) . This decoder accounts for f lex , f att , f val , f dist , and f qg as local features. Figure 1 gives an example, showing a German sentence and dependency tree from an automatic parser, an English reference, and a lattice representing possible translations. In each bundle, the arcs are listed in decreasing order according to weight and for clarity only the first five are shown. Decoder output:", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 155, |
|
"text": "(Eisner, 1997)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 253, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translation as Monolingual Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Figure 1: Decoding as lattice parsing, with the highest-scoring translation denoted by black lattice arcs (others are grayed out) and thicker blue arcs forming a dependency tree over them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation as Monolingual Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "Most MT decoders enforce a notion of \"coverage\" of the source sentence during translation: all parts of s should be aligned to some part of t (alignment to NULL incurs an explicit cost). Phrase-based systems such as Moses (Koehn et al., 2007) explicitly search for the highest-scoring string in which all source words are translated. Systems based on synchronous grammars proceed by parsing the source sentence with the synchronous grammar, ensuring that every phrase and word has an analogue in \u03c4 t (or a deliberate choice is made by the decoder to translate it to NULL). In such systems, we do not need to use features to implement source-side coverage, as it is assumed as a hard constraint always respected by the decoder. Our QDG decoder has no way to enforce coverage; it does not track any kind of state in \u03c4 s apart from a single recently aligned word. This is a problem with other direct translation models, such as IBM model 1 used as a direct model rather than a channel model (Brown et al., 1993) . This sacrifice is the result of our choice to use a conditional model ( \u00a72).", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 242, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 988, |
|
"end": 1008, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The solution is to introduce a set of coverage features g cov (a). Here, these include:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 A counter for the number of times each source word is covered: f scov (a) = n i=1 |a \u22121 (i)|. \u2022 Features that fire once when a source word is covered the zth time (z \u2208 {2, 3, 4}) and fire again all subsequent times it is covered; these are denoted f 2nd , f 3rd , and f 4th .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 A counter of uncovered source words:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "f sunc (a) = n i=1 \u03b4(|a \u22121 (i)|, 0).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Of these, only f scov is local.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source-Side Coverage Features", |
|
"sec_num": "4.2" |
|
}, |
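{

"text": "These counters are easy to compute from a completed alignment; the following is a minimal sketch (a maps each target position to a set of source indices; feature names are illustrative, not from the paper).\n\ndef g_cov(a, n):\n    counts = [0] * (n + 1)                # how often each source word is covered\n    for src_indices in a.values():\n        for i in src_indices:\n            counts[i] += 1\n    feats = {'cov:scov': float(sum(counts[1:]))}                 # f_scov\n    for z, name in ((2, 'cov:2nd'), (3, 'cov:3rd'), (4, 'cov:4th')):\n        # fires at the z-th covering and again for each later covering\n        feats[name] = float(sum(max(0, c - z + 1) for c in counts[1:]))\n    feats['cov:sunc'] = float(sum(1 for c in counts[1:] if c == 0))  # f_sunc\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Source-Side Coverage Features",

"sec_num": "4.2"

},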
|
{ |
|
"text": "The lattice QDG parsing decoder incorporates many of the features we have discussed, but not all of them. Phrase lexicon features f phr , language model features f N for N > 1, and most coverage features are non-local with respect to our QDG. Recently Chiang (2007) introduced \"cube pruning\" as an approximate decoding method that extends a DP decoder with the ability to incorporate features that break the Markovian independence assumptions DP exploits. Techniques like cube pruning can be used to include the non-local features in our decoder. 8", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 265, |
|
"text": "Chiang (2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Local Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Training requires us to learn values for the parameters \u03b8 in Eq. 2. Given T training examples of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "t (i) , \u03c4 (i) t , s (i) , \u03c4", |
|
"eq_num": "(i)" |
|
} |
|
], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "s , for i = 1, ..., T , maximum likelihood estimation for this model consists of solving Eq. 9 (Tab. 3). 9 Note that the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "LL(\u03b8) = T X i=1 log p(t (i) , \u03c4 (i) t | s (i) , \u03c4 (i) s ) = T X i=1 log P a exp{\u03b8 g(s (i) , \u03c4 (i) s , a, t (i) , \u03c4 (i) t )} P t,\u03c4 t ,a exp{\u03b8 g(s (i) , \u03c4 (i) s , a, t, \u03c4t)} = T X i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "log \"numerator\" \"denominator\" (9) alignments are treated as a hidden variable to be marginalized out. 10 Optimization problems of this form are by now widely known in NLP (Koo and Collins, 2005) , and have recently been used for machine translation as well . Such problems are typically solved using variations of gradient ascent; in our experiments, we will use an online method called stochastic gradient ascent (SGA). This requires us to calculate the function's gradient (vector of first derivatives) with respect to \u03b8. 11", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 194, |
|
"text": "(Koo and Collins, 2005)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "PL(\u03b8) = T X i=1 log \" X a p(t (i) , a | \u03c4 (i) t , s (i) , \u03c4 (i) s ) \u00ab + T X i=1 log \" X a p(\u03c4 (i) t , a | t (i) , s (i) , \u03c4 (i) s ) \u00ab (10) \"denominator\" of term 1 in Eq. 10 = n X i=0 X t \u2208Trans(s i ) S(\u03c4 \u22121 t (0), i, t ) \u00d7 exp n \u03b8 `f lex (si, t ) + f att ($, 0, t , k) + f qg (0, i, 0, k)\u00b4o (11) S(j, i, t) = Y k\u2208\u03c4 \u22121 t (j) n X i =0 X t \u2208Trans(s i ) S(k, i , t ) \u00d7 exp \uf6be \u03b8 \" f lex (s i , t ) + f att (t, j, t , k)+ f val (t, j, \u03c4 \u22121 t (j)) + f qg (i, i , j, k) \u00abff (12) S(j, i, t) = exp n \u03b8 `f val (t, j, \u03c4 \u22121 t (j))\u00b4o if \u03c4 \u22121 t (j) = \u2205 (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Computing the numerator in Eq. 9 involves summing over all possible alignments; with QDG and a hard bound of 1 on |a(j)| for all j, a fast \"inside\" DP solution is known (Smith and Eisner, 2006; Wang et al., 2007) . It runs in O(mn 2 ) time and O(mn) space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 193, |
|
"text": "(Smith and Eisner, 2006;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 212, |
|
"text": "Wang et al., 2007)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Computing the denominator in Eq. 9 requires summing over all word sequences and dependency trees for the target language sentence and all word alignments between the sentences. With a maximum length imposed, this is tractable using the \"inside\" version of the maximizing DP algorithm of Sec. 4, but it is prohibitively expensive. We therefore optimize pseudo-likelihood instead, making the following approximation (Be-10 Alignments could be supplied by automatic word alignment algorithms. We chose to leave them hidden so that we could make the best use of our parsed training data when configuration constraints are imposed, since it is not always possible to reconcile automatic word alignments with automatic parses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "11 When the function's value is computed by \"inside\" DP, the corresponding \"outside\" algorithm can be used to obtain the gradient. Because outside algorithms can be automatically derived from inside ones, we discuss only inside algorithms in this paper; see Eisner et al. (2005) . sag, 1975) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 278, |
|
"text": "Eisner et al. (2005)", |
|
"ref_id": "BIBREF13" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "p(t, \u03c4 t | s, \u03c4 s ) \u2248 p(t | \u03c4 t , s, \u03c4 s ) \u00d7 p(\u03c4 t | t, s, \u03c4 s )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Plugging this into Eq. 9, we arrive at Eq. 10 (Tab. 3). The two parenthesized terms in Eq. 10 each have their own numerators and denominators (not shown). The numerators are identical to each other and to that in Eq. 9. The denominators are much more manageable than in Eq. 9, never requiring summation over more than two structures at a time. We must sum over target word sequences and word alignments (with fixed \u03c4 t ), and separately over target trees and word alignments (with fixed t).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The summation over target word sequences and alignments given fixed \u03c4 t bears a resemblance to the inside algorithm, except that the tree structure is fixed (Pereira and Schabes, 1992) . Let S(j, i, t) denote the sum of all translations rooted at position j in \u03c4 t such that a(j) = i and t j = t.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 184, |
|
"text": "(Pereira and Schabes, 1992)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summing over t and a", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Tab. 3 gives the equations for this DP: Eq. 11 is the quantity of interest, Eq. 12 is the recursion, and Eq. 13 shows the base cases for leaves of \u03c4 t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summing over t and a", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Letting q = max 0\u2264i\u2264n |Trans(s i )|, this algorithm runs in O(mn 2 q 2 ) time and O(mnq) space. For efficiency we place a hard upper bound on q during training (details in \u00a76).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summing over t and a", |
|
"sec_num": "5.1" |
|
}, |
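{

"text": "For intuition, here is a compact memoized sketch of the recursion in Eqs. 12\u201313 (children, trans, and edge_score are assumed interfaces: children(j) returns \u03c4_t^{\u22121}(j), trans enumerates Trans, and edge_score returns the exp{\u03b8 \u00b7 (...)} factor for one attachment, with the valence factor folded in, a slight simplification of Eq. 13).\n\nfrom functools import lru_cache\n\ndef make_inside(s, children, trans, edge_score):\n    @lru_cache(maxsize=None)\n    def S(j, i, t):\n        # Sum over translations of the subtree rooted at target position j,\n        # given t_j = t and a(j) = i; a leaf returns the empty product 1.\n        total = 1.0\n        for k in children(j):                 # product over children of j\n            inner = 0.0\n            for i2 in range(len(s)):          # child alignment, incl. NULL\n                for t2 in trans(s[i2]):\n                    inner += S(k, i2, t2) * edge_score(i, t, j, i2, t2, k)\n            total *= inner\n        return total\n    return S",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Summing over t and a",

"sec_num": "5.1"

},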
|
{ |
|
"text": "For the summation over dependency trees and alignments given fixed t, required for p(\u03c4 t | t, s, \u03c4 s ), we perform \"inside\" lattice parsing with G s,\u03c4s . The technique is the summing variant of the decoding method in \u00a74, except for each state j, the sausage lattice only includes arcs from j \u2212 1 to j that are labeled with the known target word t j . If a is the number of arcs in the lattice, which is O(mn), this algorithm runs in O(a 3 ) time and requires O(a 2 ) space. Because we use a hard upper bound on |Trans(s)| for all s \u2208 \u03a3, this summation is much faster in practice than the one over words and alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summing over \u03c4 t and a", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "So far, all of our algorithms have exploited DP, disallowing any non-local features (e.g., f phr , f N for N > 1, f zth , f sunc ). We recently proposed \"cube summing,\" an approximate technique that permits the use of non-local features for inside DP algorithms (Gimpel and Smith, 2009) . Cube summing is based on a slightly less greedy variation of cube pruning (Chiang, 2007) that maintains k-best lists of derivations for each DP chart item. Cube summing augments the k-best list with a residual term that sums over remaining structures not in the k-best list, albeit without their non-local features. Using the machinery of cube summing, it is straightforward to include the desired non-local features in the summations required for pseudolikelihood, as well as to compute their approximate gradients.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 286, |
|
"text": "(Gimpel and Smith, 2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 377, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling Non-Local Features", |
|
"sec_num": "5.3" |
|
}, |
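{

"text": "At a high level, the bookkeeping for one chart item looks like the following sketch (a rough illustration of the k-best-plus-residual idea, not the full chart machinery; apply_non_local is an assumed hook that rescales a derivation's weight by its non-local feature factor).\n\nimport heapq\n\ndef approx_item_sum(derivations, k, apply_non_local):\n    # derivations: list of (weight, structure) pairs proposed for one item\n    top = heapq.nlargest(k, derivations, key=lambda d: d[0])\n    residual = sum(w for w, _ in derivations) - sum(w for w, _ in top)\n    kept = [(apply_non_local(w, struct), struct) for w, struct in top]\n    # approximate inside sum: exactly rescored k-best plus plain remainder\n    return kept, residual, sum(w for w, _ in kept) + residual",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Handling Non-Local Features",

"sec_num": "5.3"

},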
|
{ |
|
"text": "Our approach permits an alternative to minimum error-rate training (MERT; Och, 2003) ; it is discriminative but handles latent structure and regularization in more principled ways. The pseudolikelihood calculations for a sentence pair, taken together, are faster than (k-best) decoding, making SGA's inner loop faster than MERT's inner loop.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 84, |
|
"text": "Och, 2003)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling Non-Local Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our decoding framework allows us to perform many experiments with the same feature representation and inference algorithms, including combining and comparing phrase-based and syntax-based features and examining how isomorphism constraints of synchronous formalisms affect translation output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We use the German-English portion of the Basic Travel Expression Corpus (BTEC). The corpus has approximately 100K sentence pairs. We filter sentences of length more than 15 words, which only removes 6% of the data. We end up with a training set of 82,299 sentences, a develop-ment set of 934 sentences, and a test set of 500 sentences. We evaluate translation output using case-insensitive BLEU (Papineni et al., 2001) , as provided by NIST, and METEOR (Banerjee and Lavie, 2005) , version 0.6, with Porter stemming and WordNet synonym matching.", |
|
"cite_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 418, |
|
"text": "(Papineni et al., 2001)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 479, |
|
"text": "(Banerjee and Lavie, 2005)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Evaluation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Our base system uses features as discussed in \u00a72. To obtain lexical translation features g trans (s, a, t) , we use the Moses pipeline (Koehn et al., 2007) . We perform word alignment using GIZA++ (Och and Ney, 2003) , symmetrize the alignments using the \"grow-diag-final-and\" heuristic, and extract phrases up to length 3. We define f lex by the lexical probabilities p(t | s) and p(s | t) estimated from the symmetrized alignments. After discarding phrase pairs with only one target-side word (since we only allow a target word to align to at most one source word), we define f phr by 8 features: {2, 3} target words \u00d7 phrase conditional and \"lexical smoothing\" probabilities \u00d7 two conditional directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 155, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 216, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 106, |
|
"text": "(s, a, t)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Bigram and trigam language model features, f 2 and f 3 , are estimated using the SRI toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing (Chen and Goodman, 1998) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 168, |
|
"text": "(Chen and Goodman, 1998)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For our target-language syntactic features g syn , we use features similar to lexicalized CFG events (Collins, 1999) , specifically following the dependency model of Klein and Manning (2004) . These include probabilities associated with individual attachments (f att ) and child-generation valence probabilities (f val ). These probabilities are estimated on the training corpus parsed using the Stanford factored parser (Klein and Manning, 2003) . The same probabilities are also included using 50 hard word classes derived from the parallel corpus using the GIZA++ mkcls utility (Och and Ney, 2003) . In total, there are 7 lexical and 7 word-class syntax features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 116, |
|
"text": "(Collins, 1999)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 190, |
|
"text": "Klein and Manning (2004)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 446, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 600, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For reordering, we use a single absolute distortion feature f dist (i, j) that returns |i\u2212j| whenever a(j) = i and i, j > 0. (Unlike the other feature functions, which returned probabilities, this feature function returns a nonnegative integer.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The tree-to-tree syntactic features g tree 2 in our model are binary features f qg that fire for particular QG configurations. We use one feature for each of the configurations in (Smith and Eisner, 2006) , adding 7 additional features that score configura- ", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 204, |
|
"text": "(Smith and Eisner, 2006)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Our model permits training the system on the full set of parallel data, but we instead use the parallel data to estimate feature functions and learn \u03b8 on the development set. 12 We trained using three iterations of SGA over the development data with a batch size of 1 and a fixed step size of 0.01. We used 2 regularization with a fixed, untuned coefficient of 0.1. Cube summing used a 10-best list for training and a 7-best list for decoding unless otherwise specified.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 177, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "To obtain the translation lexicon (Trans) we first included the top three target words t for each s using p(s | t) \u00d7 p(t | s) to score target words. For any training sentence s, t and t j for which", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "t j \u2208 n i=1 Trans(s i ), we added t j to Trans(s i ) for i = argmax i \u2208I p(s i |t j ) \u00d7 p(t j |s i ), where I = {i : 0 \u2264 i \u2264 n \u2227 |Trans(s i )| < q i }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We used q 0 = 10 and q >0 = 5, restricting |Trans(NULL)| \u2264 10 and |Trans(s)| \u2264 5 for any s \u2208 \u03a3. This made 191 of the development sentences unreachable by the model, leaving 743 sentences for learning \u03b8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "6.3" |
|
}, |
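{

"text": "A sketch of this lexicon construction (illustrative names: score[s][t] stands for p(s | t) \u00d7 p(t | s), and cap(i) for the per-slot limit q_i, with cap(0) = 10 and cap(i) = 5 otherwise).\n\ndef seed_trans(vocab_s, score, top_k=3):\n    # Top-k target words for each source word by p(s|t) * p(t|s).\n    return {s: sorted(score[s], key=score[s].get, reverse=True)[:top_k]\n            for s in vocab_s}\n\ndef add_missing(trans, sent_s, sent_t, score, cap):\n    # Add each target word reachable from no source word to the best\n    # unsaturated slot, as described above.\n    for t_j in sent_t:\n        if any(t_j in trans[s_i] for s_i in sent_s):\n            continue\n        open_slots = [i for i, s_i in enumerate(sent_s)\n                      if len(trans[s_i]) < cap(i)]\n        if open_slots:\n            best = max(open_slots,\n                       key=lambda i: score[sent_s[i]].get(t_j, 0.0))\n            trans[sent_s[best]].append(t_j)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Procedure",

"sec_num": "6.3"

},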
|
{ |
|
"text": "During decoding, we generated lattices with all t \u2208 Trans(s i ) for 0 \u2264 i \u2264 n, for every position. We used \u03c1 = 0.9, causing states within 90% of the source sentence length to be final states. Between each pair of consecutive states, we pruned edges that fell outside a beam of 70% of the sum of edge weights (see \u00a74.1; edge weights use f lex , f dist , and f scov ) of all edges between those two states.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Procedure", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Our first set of experiments compares feature sets commonly used in phrase-and syntax-based translation. In particular, we compare the effects of combining phrase features and syntactic features. The base model contains f lex , g lm , g reor , and g cov . The results are shown in Table 4 . The second row contains scores when adding in the eight f phr features. The second column shows scores when adding the 14 target syntax features (f att and f val ), and the third column adds to them the 14 additional tree-to-tree features (f qg ). We find large gains in BLEU by adding more features, and find that gains obtained through phrase features and syntactic features are partially additive, suggesting that these feature sets are making complementary contributions to translation quality.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 288, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Set Comparison", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "For models without syntactic features, we constrained the decoder to produce dependency trees in which every word's parent is immediately to its right and ignored syntactic features while scoring structures. This causes decoding to proceed leftto-right in the lattice, the way phrase-based decoders operate. Since these models do not search over trees, they are substantially faster during decoding than those that use syntactic features and do not require any pruning of the lattice. Therefore, we explored varying the value of k used during k-best cube decoding; results are shown in Fig. 2 . Scores improve when we increase k up to 10, but not much beyond, and there is still a substantial gap (2.5 BLEU) between using phrase features with k = 20 and using all features with k = 5. Models without syntax perform poorly when using a very small k, due to their reliance on non-local language model and phrase features. By contrast, models with syntactic features, which are local in our decoder, perform relatively well even with k = 1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 586, |
|
"end": 592, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Varying k During Decoding", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "We next compare different constraints on isomorphism between the source and target dependency Table 5 : QG configuration comparison. The name of each configuration, following Smith and Eisner (2006) , refers to the relationship between a(\u03c4t(j)) and a(j) in \u03c4s.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 198, |
|
"text": "Smith and Eisner (2006)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 101, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "QG Configuration Comparison", |
|
"sec_num": "6.6" |
|
}, |
|
{ |
|
"text": "trees. To do this, we impose harsh penalties on some QDG configurations ( \u00a73) by fixing their feature weights to \u22121000. Hence they are permitted only when absolutely necessary in training and rarely in decoding. 13 Each model uses all phrase and syntactic features; they differ only in the sets of configurations which have fixed negative weights. Tab. 5 shows experimental results. The base \"synchronous\" model permits parent-child (a(\u03c4 t (j)) = \u03c4 s (a(j))), any configuration where a(j) = 0, including both words being linked to NULL, and requires the root word in \u03c4 t to be linked to the root word in \u03c4 s or to NULL(5 of our 14 configurations). The second row allows any configuration involving NULL, including those where t j aligns to a non-NULL word in s and its parent aligns to NULL, and allows the root in \u03c4 t to be linked to any word in \u03c4 s . Each subsequent row adds additional configurations (i.e., trains its \u03b8 rather than fixing it to \u22121000). In general, we see large improvements as we permit more configurations, and the largest jump occurs when we add the \"sibling\" configuration (\u03c4 s (a(\u03c4 t (j))) = \u03c4 s (a(j))). The BLEU score does not increase, however, when we permit all configurations in the final row of the table, and the METEOR score increases only slightly. While allowing certain categories of non-isomorphism clearly seems helpful, permitting arbitrary violations does not appear to be necessary for this dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 214, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "QG Configuration Comparison", |
|
"sec_num": "6.6" |
|
}, |
|
{ |
|
"text": "We note that these results are not state-of-theart on this dataset (on this task, Moses/MERT achieves 0.6838 BLEU and 0.8523 METEOR with maximum phrase length 3). 14 Our aim has been to 13 In fact, the strictest \"synchronous\" model used the almost-forbidden configurations in 2% of test sentences; this behavior disappears as configurations are legalized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 188, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "14 We believe one cause for this performance gap is the generation of the lattice and plan to address this in future work by allowing the phrase table to inform lattice generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "illustrate how a single model can provide a controlled experimental framework for comparisons of features, of inference methods, and of constraints. Our findings show that phrase features and dependency syntax produce complementary improvements to translation quality, that tree-totree configurations (a new feature in MT) are helpful for translation, and that substantial gains can be obtained by permitting certain types of nonisomorphism. We have validated cube summing and decoding as practical methods for approximate inference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "Our framework permits exploration of alternative objectives, alternative approximate inference techniques, additional hidden variables (e.g., Moses' phrase segmentation variable), and, of course, additional feature representations. The system is publicly available at www.ark.cs. cmu.edu/Quipu.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "We presented feature-rich MT using a principled probabilistic framework that separates features from inference. Our novel decoder is based on efficient DP-based QG lattice parsing extended to handle \"non-local\" features using generic techniques that also support efficient parameter estimation. Controlled experiments permitted with this system show interesting trends in the use of syntactic features and constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Informally, features are \"parts\" of a parallel sentence pair and/or their mutual derivation structure (trees, alignments, etc.). Features are often implied by a choice of formalism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To date, QG has been used for word alignment(Smith and Eisner, 2006), adaptation and projection in parsing(Smith and Eisner, 2009), and various monolingual recognition and scoring tasks(Wang et al., 2007;Das and Smith, 2009); this paper represents its first application to MT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I.e., from here on, a : {1, . . . , m} \u2192 {0, . . . , n} where 0 denotes alignment to NULL.7 Arguably, we seek argmax t p(t | s), marginalizing out everything else. Approximate solutions have been proposed for that problem in several settingsSun and Tsujii, 2009); we leave their combination with our approach to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A full discussion is omitted for space, but in fact we use \"cube decoding,\" a slightly less approximate, slightly more expensive method that is more closely related to the approximate inference methods we use for training, discussed in \u00a75.9 In practice, we regularize by including a term \u2212c \u03b8 2 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
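
{

"text": "A minimal sketch of this penalized objective follows (ours; loglik_and_grad is a stand-in for the cube-summing-based (pseudo)likelihood computation, and all names are assumptions):

import numpy as np

def penalized_objective(theta, loglik_and_grad, c):
    # Returns the regularized criterion loglik(theta) - c * ||theta||_2^2
    # and its gradient, for use with a generic gradient-based optimizer.
    ll, grad = loglik_and_grad(theta)
    obj = ll - c * np.dot(theta, theta)    # L2 penalty term -c * ||theta||_2^2
    obj_grad = grad - 2.0 * c * theta      # gradient contribution -2c * theta
    return obj, obj_grad",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "",

"sec_num": null

},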
|
{ |
|
"text": "We made this choice both for similarity to standard MT systems and a more rapid experiment cycle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank three anonymous EMNLP reviewers, David Smith, and Stephan Vogel for helpful comments and feedback that improved this paper. This research was supported by NSF IIS-0836431 and IIS-0844507, a grant from Google, and computational resources provided by Yahoo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning dependency translation modles as colections of finite-state head transducers", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Alshawi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Douglas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "1", |
|
"pages": "45--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Alshawi, S. Bangalore, and S. Douglas. 2000. Learning dependency translation modles as colec- tions of finite-state head transducers. Computa- tional Linguistics, 26(1):45-60.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Banerjee and A. Lavie. 2005. METEOR: An au- tomatic metric for MT evaluation with improved correlation with human judgments. In Proc. of ACL Workshop on Intrinsic and Extrinsic Evalua- tion Measures for MT and/or Summarization.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistical analysis of non-lattice data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Besag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "The Statistician", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "179--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. E. Besag. 1975. Statistical analysis of non-lattice data. The Statistician, 24:179-195.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Probabilistic inference for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Blunsom and M. Osborne. 2008. Probabilistic infer- ence for machine translation. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A discriminative latent variable model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Blunsom, T. Cohn, and M. Osborne. 2008. A dis- criminative latent variable model for statistical ma- chine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Compu- tational Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "An empirical study of smoothing techniques for language modeling", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Chen and J. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Tech- nical report 10-98, Harvard University.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Online large-margin training of syntactic and structural translation features", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Marton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Chiang, Y. Marton, and P. Resnik. 2008. On- line large-margin training of syntactic and structural translation features. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A hierarchical phrase-based model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Hierarchical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "201--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Head-Driven Statistical Models for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, U. Penn.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Paraphrase identification as probabilistic quasi-synchronous recognition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Das and N. A. Smith. 2009. Paraphrase identifica- tion as probabilistic quasi-synchronous recognition. In Proc. of ACL-IJCNLP.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Machine translation using probabilistic synchronous dependency insertion grammar", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Ding and M. Palmer. 2005. Machine translation us- ing probabilistic synchronous dependency insertion grammar. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Goldlust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of HLT-EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Eisner, E. Goldlust, and N. A. Smith. 2005. Com- piling Comp Ling: Practical weighted dynamic pro- gramming and the Dyna language. In Proc. of HLT- EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bilexical grammars and a cubic-time probabilistic parser", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of IWPT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Eisner. 1997. Bilexical grammars and a cubic-time probabilistic parser. In Proc. of IWPT.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Scalable inference and training of context-rich syntactic translation models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Deneefe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Thayer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang, and I. Thayer. 2006. Scalable infer- ence and training of context-rich syntactic transla- tion models. In Proc. of COLING-ACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Rich sourceside context for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of ACL-2008 Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Gimpel and N. A. Smith. 2008. Rich source- side context for statistical machine translation. In Proc. of ACL-2008 Workshop on Statistical Machine Translation.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Cube summing, approximate inference with non-local features, and dynamic programming without semirings", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Gimpel and N. A. Smith. 2009. Cube summing, approximate inference with non-local features, and dynamic programming without semirings. In Proc. of EACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Using supertags as source language context in SMT", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Haque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Naskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of EAMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Haque, S. K. Naskar, Y. Ma, and A. Way. 2009. Using supertags as source language context in SMT. In Proc. of EAMT.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Forest rescoring: Faster decoding with integrated language models", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Huang and D. Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Direct translation model 2", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Ittycheriah and S. Roukos. 2007. Direct translation model 2. In Proc. of HLT-NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Fast exact inference with a factored model for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Advances in NIPS 15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C. D. Manning. 2003. Fast exact in- ference with a factored model for natural language parsing. In Advances in NIPS 15.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of depen- dency and constituency. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of HLT-NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Herbst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL (demo session).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Hidden-variable models for discriminative reranking", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Koo and M. Collins. 2005. Hidden-variable models for discriminative reranking. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "An end-to-end discriminative approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to ma- chine translation. In Proc. of COLING-ACL.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Translation as weighted deduction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Lopez. 2009. Translation as weighted deduction. In Proc. of EACL.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Statistical machine translation with syntactified target language phrases", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Marcu, W. Wang, A. Echihabi, and K. Knight. 2006. Statistical machine translation with syntactified tar- get language phrases. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Forest-based translation", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Mi, L. Huang, and Q. Liu. 2008. Forest-based translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Discriminative training and maximum entropy models for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och and H. Ney. 2002. Discriminative train- ing and maximum entropy models for statistical ma- chine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computa- tional Linguistics, 29(1).", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Minimum error rate training for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och. 2003. Minimum error rate training for sta- tistical machine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Featurebased language understanding", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "EUROSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Papineni, S. Roukos, and T. Ward. 1997. Feature- based language understanding. In EUROSPEECH.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2001. BLEU: a method for automatic evaluation of ma- chine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Inside-outside reestimation from partially bracketed corpora", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. C. N. Pereira and Y. Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Dependency treelet translation: Syntactically informed phrasal SMT", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Quirk, A. Menezes, and C. Cherry. 2005. De- pendency treelet translation: Syntactically informed phrasal SMT. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A new string-to-dependency machine translation algorithm with a target dependency language model", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Shen, J. Xu, and R. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of HLT-NAACL Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. A. Smith and J. Eisner. 2006. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. In Proc. of HLT-NAACL Workshop on Statistical Machine Translation.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Parser adaptation and projection with quasi-synchronous features", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. A. Smith and J. Eisner. 2009. Parser adaptation and projection with quasi-synchronous features. In Proc. of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "SRILM-an extensible language modeling toolkit", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of ICSLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proc. of ICSLP.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Sequential labeling with latent variables: An exact inference algorithm and its efficient approximation", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Sun and J. Tsujii. 2009. Sequential labeling with latent variables: An exact inference algorithm and its efficient approximation. In Proc. of EACL.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "What is the Jeopardy model? a quasi-synchronous grammar for QA", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mitamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Wang, N. A. Smith, and T. Mitamura. 2007. What is the Jeopardy model? a quasi-synchronous gram- mar for QA. In Proc. of EMNLP-CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A syntax-based statistical translation model", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Yamada and K. Knight. 2001. A syntax-based sta- tistical translation model. In Proc. of ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Comparison of size of k-best list for cube decoding with various feature sets.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Key notation. Feature factorings are elaborated in Tab. 2." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Source:</td><td colspan=\"2\">$ konnten sie es \u00fcbersetzen ?</td><td/><td/></tr><tr><td colspan=\"2\">Reference:</td><td colspan=\"2\">could you translate it ?</td><td/><td/></tr><tr><td>$</td><td colspan=\"2\">konnten:could</td><td>konnten:could</td><td>es:it</td><td>?:?</td><td>?:?</td></tr><tr><td/><td colspan=\"2\">sie:you</td><td>sie:you</td><td>konnten:could</td><td>\u00fcbersetzen: translate</td><td>\u00fcbersetzen: translate</td></tr><tr><td/><td colspan=\"2\">konnten:couldn</td><td>es:it</td><td>sie:you</td><td>\u00fcbersetzen: translated</td><td>\u00fcbersetzen: translated</td></tr><tr><td/><td colspan=\"2\">konnten:might</td><td>sie:let</td><td>?:?</td><td>es:it</td><td>es:it</td></tr><tr><td/><td colspan=\"2\">es:it</td><td>sie:them</td><td>\u00fcbersetzen: translate</td><td>konnten:could</td><td>NULL:to</td></tr><tr><td/><td>...</td><td/><td>...</td><td>...</td><td>...</td><td>...</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "The output of the decoder consists of lattice arcs" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Eq. 9: Log-likelihood. Eq. 10: Pseudolikelihood. In both cases we maximize w.r.t. \u03b8. Eqs. 11-13: Recursive DP equations for summing over t and a." |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Feature set comparison (BLEU).tions involving root words and NULL-alignments more finely. There are 14 features in this category.Coverage features g cov are as described in \u00a74.2. In all, 46 feature weights are learned." |
|
} |
|
} |
|
} |
|
} |