{
"paper_id": "C08-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:25:52.713970Z"
},
"title": "Improving Statistical Machine Translation using Lexicalized Rule Selection",
"authors": [
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a novel lexicalized approach for rule selection for syntax-based statistical machine translation (SMT). We build maximum entropy (MaxEnt) models which combine rich context information for selecting translation rules during decoding. We successfully integrate the MaxEnt-based rule selection models into the state-of-the-art syntax-based SMT model. Experiments show that our lexicalized approach for rule selection achieves statistically significant improvements over the state-of-the-art SMT system.",
"pdf_parse": {
"paper_id": "C08-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a novel lexicalized approach for rule selection for syntax-based statistical machine translation (SMT). We build maximum entropy (MaxEnt) models which combine rich context information for selecting translation rules during decoding. We successfully integrate the MaxEnt-based rule selection models into the state-of-the-art syntax-based SMT model. Experiments show that our lexicalized approach for rule selection achieves statistically significant improvements over the state-of-the-art SMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The syntax-based statistical machine translation (SMT) models (Chiang, 2005; Liu et al., 2006; Galley et al., 2006; Huang et al., 2006) use rules with hierarchical structures as translation knowledge, which can capture long-distance reorderings. Generally, a translation rule consists of a left-handside (LHS) 1 and a right-hand-side (RHS). The LHS and RHS can be words, phrases, or even syntactic trees, depending on SMT models. Translation rules can be learned automatically from parallel corpus. Usually, an LHS may correspond to multiple RHS's in multiple rules. Therefore, in statistical machine translation, the rule selection task is to select the correct RHS for an LHS during decoding.",
"cite_spans": [
{
"start": 62,
"end": 76,
"text": "(Chiang, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 77,
"end": 94,
"text": "Liu et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 95,
"end": 115,
"text": "Galley et al., 2006;",
"ref_id": "BIBREF6"
},
{
"start": 116,
"end": 135,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The conventional approach for rule selection is to use precomputed translation probabilities which are estimated from the training corpus, as well as a n-gram language model which is trained on the target language. The limitation of this method is that it ignores context information (especially on the source-side) during decoding. Take the hierarchical model (Chiang, 2005) as an example. Consider the following rules for Chinese-to-English translation 2 :",
"cite_spans": [
{
"start": 361,
"end": 375,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) X \u2192 \u27e8X1 X2, X2 in X1\u27e9 (2) X \u2192 \u27e8X1 X2, at X1's X2\u27e9 (3) X \u2192 \u27e8X1 X2, with X2 of X1\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These rules have the same source-side, and all of them can pattern-match all the following source phrases: Given a source phrase, how does the decoder know which rule is suitable? In fact, rule (1) and rule (2) have different syntactic structures (the left two trees of Figure 1 ). Thus rule (1) can be used for translating noun phrase (a), and rule (2) can be applied to prepositional phrase (b). The weakness of Chiang's hierarchical model is that it cannot distinguish different structures on the source-side. The linguistically syntax-based models (Liu et al., 2006; Huang et al., 2006) can distinguish syntactic structures by parsing source sentence. However, as an LHS tree may correspond to different RHS strings in different rules (the right two rules of Figure 1) , these models also face the rule selection problem during decoding.",
"cite_spans": [
{
"start": 552,
"end": 570,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 571,
"end": 590,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF8"
},
{
"start": 763,
"end": 772,
"text": "Figure 1)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NP DNP PP X 1 X 2 X 2 of X 1 PP LCP NP X 1 X 2 at X 1 's X 2 PP LCP NP X 1 X 2 with X 2 of X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a lexicalized approach for rule selection for syntax-based statistical machine translation. We use the maximum entropy approach to combine various context features, e.g., context words of rules, boundary words of phrases, parts-of-speech (POS) information. Therefore, the decoder can use rich context information to perform context-dependent rule selection. We build a maximum entropy based rule selection (MaxEnt RS) model for each ambiguous hierarchical LHS, the LHS which contains nonterminals and corresponds to multiple RHS's in multiple rules. We integrate the MaxEnt RS models into the state-ofthe-art hierarchical SMT system (Chiang, 2005) . Experiments show that the lexicalized rule selection approach improves translation quality of the state-of-the-art SMT system, and the improvements are statistically significant.",
"cite_spans": [
{
"start": 659,
"end": 673,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Statistical machine translation systems usually face the selection problem because of the one-tomany correspondence between the source and target language. Recent researches showed that rich context information can help SMT systems perform selection and improves translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Selection Problem in SMT",
"sec_num": "2.1"
},
{
"text": "The discriminative phrasal reordering models (Xiong et al., 2006; Zens and Ney, 2006) provided a lexicalized method for phrase reordering.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Xiong et al., 2006;",
"ref_id": "BIBREF18"
},
{
"start": 66,
"end": 85,
"text": "Zens and Ney, 2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Selection Problem in SMT",
"sec_num": "2.1"
},
{
"text": "In these models, LHS and RHS can be considered as phrases and reordering types, respectively. Therefore the selection task is to select a reordering type for phrases. They use a MaxEnt model to combine context features and distinguished two kinds of reorderings between two adjacent phrases: monotone or swap. However, our method is more generic, we perform lexicalized rule selection for syntax-based SMT models. In these models, the rules with hierarchical structures can handle reorderings of non-adjacent phrases. Furthermore, the rule selection can be considered as a multiclass classification task, while the phrase reordering between two adjacent phrases is a two-class classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Selection Problem in SMT",
"sec_num": "2.1"
},
{
"text": "Recently, word sense disambiguation (WSD) techniques improved the performance of SMT systems by helping the decoder perform lexical selection. Carpuat and Wu (2007b) integrated a WSD system into a phrase-based SMT system, Pharaoh (Koehn, 2004a) . Furthermore, they extended WSD to phrase sense disambiguation (PSD) (Carpuat and Wu, 2007a) . Either the WSD or PSD system combines rich context information to solve the ambiguity problem for words or phrases. Their experiments showed stable improvements of translation quality. These are different from our work. On one hand, they focus on solving the lexical ambiguity problem, and use a WSD or PSD system to predict translations for phrases which only consist of words. However, we put emphasis on rule selection, and predict translations for hierarchical LHS's which consist of both words and nonterminals. On the other hand, they incorporated a WSD or PSD system into a phrase-based SMT system with a weak distortion model for phrase reordering. While we incorporate MaxEnt RS models into the state-of-the-art syntax-based SMT system, which captures phrase reordering by using a hierarchical model. Chan et al. (2007) incorporated a WSD system into the hierarchical SMT system, Hiero (Chiang, 2005) , and reported statistically significant improvement. But they only focused on solving ambiguity for terminals of translation rules, and limited the length of terminals up to 2. Different from their work, we consider a translation rule as a whole, which contains both terminals and nonterminals. Moreover, they explored features for the WSD system only on the source-side. While we define context features for the MaxEnt RS models on both the source-side and target-side.",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "Carpuat and Wu (2007b)",
"ref_id": "BIBREF2"
},
{
"start": 230,
"end": 244,
"text": "(Koehn, 2004a)",
"ref_id": "BIBREF10"
},
{
"start": 315,
"end": 338,
"text": "(Carpuat and Wu, 2007a)",
"ref_id": "BIBREF1"
},
{
"start": 1151,
"end": 1169,
"text": "Chan et al. (2007)",
"ref_id": "BIBREF3"
},
{
"start": 1236,
"end": 1250,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Selection Problem in SMT",
"sec_num": "2.1"
},
{
"text": "The hierarchical model (Chiang, 2005; Chiang, 2007) is built on a weighted synchronous contextfree grammar (SCFG) . A SCFG rule has the following form:",
"cite_spans": [
{
"start": 23,
"end": 37,
"text": "(Chiang, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 38,
"end": 51,
"text": "Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "X \u2192 \u03b1, \u03b3, \u223c (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "where X is a nonterminal, \u03b1 is an LHS string consists of terminals and nonterminals, \u03b3 is the translation of \u03b1, \u223c defines a one-one correspondence between nonterminals in \u03b1 and \u03b3. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "(5) X \u2192 , economic development (6) X \u2192 X 1 X 2 the X 2 of X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "Rule (5) contains only terminals, which is similar to phrase-to-phrase translation in phrase-based SMT models. Rule (6) contains both terminals and nonterminals, which causes a reordering of phrases. The hierarchical model uses the maximum likelihood method to estimate translation probabilities for a phrase pair \u03b1, \u03b3 , independent of any other context information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "To perform translation, Chiang uses a log-linear model (Och and Ney, 2002) to combine various features. The weight of a derivation D is computed by:",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "w(D) = i \u03c6 i (D) \u03bb i (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
{
"text": "where \u03c6 i (D) is a feature function and \u03bb i is the feature weight of \u03c6 i (D). During decoding, the decoder searches the best derivation with the lowest cost by applying SCFG rules. However, the rule selections are independent of context information, except the left neighboring n \u2212 1 target words for computing n-gram language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},
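{
"text": "As a concrete illustration of equation (7), here is a minimal sketch (our own, not the authors' code) of this log-linear scoring in Python; the feature values and weights below are hypothetical:

import math

def derivation_weight(phi, lam):
    # phi: feature values phi_i(D) > 0; lam: weights lambda_i.
    # Work in log space to avoid underflow; a decoder typically
    # minimizes the equivalent cost -log w(D).
    log_w = sum(l * math.log(p) for p, l in zip(phi, lam))
    return math.exp(log_w)

# Hypothetical example: a translation probability, a language model
# score, and a penalty term.
print(derivation_weight([0.2, 0.01, math.exp(-3)], [1.0, 0.5, -0.3]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hierarchical Model",
"sec_num": "2.2"
},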
{
"text": "The rule selection task can be considered as a multi-class classification task. For a source-side, each corresponding target-side is a label. The maximum entropy approach (Berger et al., 1996) is known to be well suited to solve the classification problem. Therefore, we build a maximum entropy based rule selection (MaxEnt RS) model for each ambiguous hierarchical LHS. In this section, we will describe how to build the MaxEnt RS models and how to integrate them into the hierarchical SMT model.",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Rule Selection",
"sec_num": "3"
},
{
"text": "Following (Chiang, 2005) , we use \u03b1, \u03b3 to represent a SCFG rule extracted from the training corpus, where \u03b1 and \u03b3 are source and target strings, respectively. The nonterminals in \u03b1 and \u03b3 are represented by X k , where k is an index indicating one-one correspondence between nonterminals in source and target sides. Let us use f (X k ) to represent the source text covered by X k , and e(X k ) to represent the translation of f (X k ). Let C(\u03b1) be the context information of source text matched by \u03b1, and C(\u03b3) be the context information of target text matched by \u03b3. Under the MaxEnt model, we have:",
"cite_spans": [
{
"start": 10,
"end": 24,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "P rs (\u03b3|\u03b1, f (X k ), e(X k )) = (8) exp[ i \u03bb i h i (C(\u03b3), C(\u03b1), f(X k ), e(X k ))] \u03b3 exp[ i \u03bb i h i (C(\u03b3 ), C(\u03b1), f(X k ), e(X k ))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "where h i is a binary feature function, \u03bb i is the feature weight of h i . The MaxEnt RS model combines rich context information of grammar rules, as well as information of the subphrases which will be reduced to nonterminal X during decoding. However, these information is ignored by Chiang's hierarchical model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
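{
"text": "A minimal sketch of equation (8), assuming one MaxEnt model per ambiguous LHS with binary context features; the function and variable names are ours, not the paper's:

import math

def maxent_prob(candidates, active_features, weights):
    # candidates: the target-sides gamma observed for one ambiguous LHS alpha.
    # active_features(gamma) -> set of firing binary feature names h_i,
    # computed from C(gamma), C(alpha), f(Xk) and e(Xk).
    # weights: dict mapping a feature name to its weight lambda_i.
    scores = {g: sum(weights.get(f, 0.0) for f in active_features(g))
              for g in candidates}
    z = sum(math.exp(s) for s in scores.values())  # sum over gamma'
    return {g: math.exp(s) / z for g, s in scores.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},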
{
"text": "We design three kinds of features for a rule \u03b1, \u03b3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "\u2022 Lexical features, which are the words immediately to the left and right of \u03b1, and boundary words of subphrase f (X k ) and e(X k );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "\u2022 Parts-of-speech (POS) features, which are POS tags of the source words defined in lexical features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "\u2022 Length features, which are the length of subphrases f (X k ) and e(X k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "Table 1. Feature categories of the MaxEnt RS model. Source-side lexical features: W\u03b1\u22121 (the source word immediately to the left of \u03b1), W\u03b1+1 (the source word immediately to the right of \u03b1), WL_f(Xk) (the first word of f(Xk)), WR_f(Xk) (the last word of f(Xk)). Source-side POS features: P\u03b1\u22121 (POS of W\u03b1\u22121), P\u03b1+1 (POS of W\u03b1+1), PL_f(Xk) (POS of WL_f(Xk)), PR_f(Xk) (POS of WR_f(Xk)). Source-side length feature: LEN_f(Xk) (length of the source subphrase f(Xk)). Target-side lexical features: WL_e(Xk) (the first word of e(Xk)), WR_e(Xk) (the last word of e(Xk)). Target-side length feature: LEN_e(Xk) (length of the target subphrase e(Xk)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "Table 2. Features of rule X \u2192 \u27e8X1 X2, X2 in the X1\u27e9. Lexical features: W\u03b1\u22121 = , W\u03b1+1 = , WL_f(X1) = , WR_f(X1) = , WL_f(X2) = , WR_f(X2) = , WL_e(X1) = economic, WR_e(X1) = field, WL_e(X2) = cooperation, WR_e(X2) = cooperation. POS features: P\u03b1\u22121 = v, P\u03b1+1 = wj, PL_f(X1) = n, PR_f(X1) = n, PL_f(X2) = vn, PR_f(X2) = vn. Length features: LEN_f(X1) = 2, LEN_f(X2) = 1, LEN_e(X1) = 2, LEN_e(X2) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},
{
"text": "strengthen the cooperation in the economic field . Table 1 shows these features in detail. These features can be easily gathered according to Chinag's rule extraction method (Chiang, 2005) . We use an example for illustration. Figure 2 is a word-aligned training example with POS tags on the source side. We can obtain a SCFG rule:",
"cite_spans": [
{
"start": 174,
"end": 188,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 227,
"end": 233,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Type Name Description W\u03b1 \u22121",
"sec_num": null
},
{
"text": "(9) X \u2192 X 1 X 2 , X 2 in the X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Name Description W\u03b1 \u22121",
"sec_num": null
},
{
"text": "Where the source phrases covered by X 1 and X 2 are \" \" and \" \", respectively. Table 2 shows features of this rule. Note that following (Chiang, 2005) , we limit the number of nonterminals of a rule up to 2. Thus a rule may have 20 features at most.",
"cite_spans": [
{
"start": 137,
"end": 151,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Type Name Description W\u03b1 \u22121",
"sec_num": null
},
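{
"text": "The feature gathering described above can be sketched as follows (a hedged illustration; the span and subphrase representations are our assumptions, not the authors' data structures):

def extract_features(src_words, src_pos, alpha_span, sub_spans, tgt_subphrases):
    # alpha_span: (i, j) covered by alpha; sub_spans[k]: (a, b) span of f(Xk);
    # tgt_subphrases[k]: list of target words e(Xk).
    i, j = alpha_span
    feats = {}
    feats['W_alpha-1'] = src_words[i - 1] if i > 0 else '<s>'
    feats['W_alpha+1'] = src_words[j + 1] if j + 1 < len(src_words) else '</s>'
    feats['P_alpha-1'] = src_pos[i - 1] if i > 0 else '<s>'
    feats['P_alpha+1'] = src_pos[j + 1] if j + 1 < len(src_words) else '</s>'
    for k, (a, b) in sub_spans.items():
        feats[f'WL_f(X{k})'] = src_words[a]
        feats[f'WR_f(X{k})'] = src_words[b]
        feats[f'PL_f(X{k})'] = src_pos[a]
        feats[f'PR_f(X{k})'] = src_pos[b]
        feats[f'LEN_f(X{k})'] = b - a + 1
    for k, e in tgt_subphrases.items():
        feats[f'WL_e(X{k})'] = e[0]
        feats[f'WR_e(X{k})'] = e[-1]
        feats[f'LEN_e(X{k})'] = len(e)
    return feats

With at most two nonterminals per rule, this yields the at most 20 features noted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},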
{
"text": "After extracting features from the training corpus, we use the toolkit implemented by Zhang (2004) to train a MaxEnt RS model for each ambiguous hierarchical LHS. We set iteration number to 100 and Gaussian prior to 1.",
"cite_spans": [
{
"start": 86,
"end": 98,
"text": "Zhang (2004)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Type Name Description W\u03b1 \u22121",
"sec_num": null
},
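{
"text": "A minimal sketch of this per-LHS training loop, using scikit-learn's LogisticRegression as a stand-in for Zhang's MaxEnt toolkit (an assumption on our part; max_iter and the L2 penalty roughly play the roles of the iteration count and the Gaussian prior):

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def train_rs_models(training_events):
    # training_events: {lhs: [(feature_dict, target_side), ...]}
    models = {}
    for lhs, events in training_events.items():
        feats, labels = zip(*events)
        if len(set(labels)) < 2:
            continue  # non-ambiguous LHS: no classifier is trained
        vec = DictVectorizer()
        clf = LogisticRegression(max_iter=100, C=1.0)
        clf.fit(vec.fit_transform(feats), list(labels))
        models[lhs] = (vec, clf)
    return models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MaxEnt RS Model",
"sec_num": "3.1"
},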
{
"text": "We integrate the MaxEnt RS models into the SMT model during the translation of each source sentence. Thus the MaxEnt RS models can help the decoder perform context-dependent rule selection during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "In (Chiang, 2005) , the log-linear model combines 8 features: the translation probabilities P (\u03b3|\u03b1) and P (\u03b1|\u03b3), the lexical weights P w (\u03b3|\u03b1) and P w (\u03b1|\u03b3), the language model, the word penalty, the phrase penalty, and the glue rule penalty. For integration, we add two new features:",
"cite_spans": [
{
"start": 3,
"end": 17,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "\u2022 P rs (\u03b3|\u03b1, f (X k ), e(X k )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "This feature is computed by the MaxEnt RS model, which gives a probability that the model selecting a target-side \u03b3 given an ambiguous source-side \u03b1, considering context information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "\u2022 P rsn = exp(1). This feature is similar to phrase penalty feature. In our experiments, we find that some source-sides are not ambiguous, and correspond to only one targetside. However, if a source-side \u03b1 is not ambiguous, the first feature P rs will be set to 1.0. In fact, these rules are not reliable since they usually occur only once in the training corpus. Therefore, we use this feature to reward the ambiguous source-side. During decoding, if an LHS has multiple translations, this feature is set to exp(1), otherwise it is set to exp(0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
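{
"text": "Putting the two features together, a sketch of how they could be computed for one applied rule (reusing the hypothetical per-LHS models from the training sketch above):

import math

def rule_selection_features(rs_models, lhs, gamma, context_feats):
    # Returns (P_rs, P_rsn) for one applied rule.
    if lhs not in rs_models:         # non-ambiguous source-side
        return 1.0, math.exp(0)      # P_rs fixed to 1.0, no reward
    vec, clf = rs_models[lhs]
    x = vec.transform([context_feats])
    probs = dict(zip(clf.classes_, clf.predict_proba(x)[0]))
    return probs.get(gamma, 0.0), math.exp(1)  # reward the ambiguous LHS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},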
{
"text": "The advantage of our integration is that we need not change the main decoding algorithm of a SMT system. Furthermore, the weights of the new features can be trained together with other features of the translation model. Chiang (2007) uses the CKY algorithm with a cube pruning method for decoding. This method can significantly reduce the search space by efficiently computing the top-n items rather than all possible items at a node, using the k-best Algorithms of Huang and Chiang (2005) to speed up the computation. In cube pruning, the translation model is treated as the monotonic backbone of the search space, while the language model score is a non-monotonic cost that distorts the search space (see (Huang and Chiang, 2005) for definition of monotonicity). Similarly, in the MaxEnt RS model, source-side features form a monotonic score while target-side features constitute a nonmonotonic cost that can be seen as part of the language model.",
"cite_spans": [
{
"start": 220,
"end": 233,
"text": "Chiang (2007)",
"ref_id": "BIBREF5"
},
{
"start": 466,
"end": 489,
"text": "Huang and Chiang (2005)",
"ref_id": "BIBREF7"
},
{
"start": 707,
"end": 731,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
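{
"text": "The lazy k-best enumeration at the heart of cube pruning can be sketched as a self-contained toy (not the Hiero implementation): given two cost-sorted item lists and a combination cost, pop the best corner of the grid and push its two neighbors.

import heapq

def cube_top_k(items_a, items_b, combine_cost, k):
    # items_a, items_b: costs sorted ascending; returns up to k best pairs.
    heap = [(combine_cost(items_a[0], items_b[0]), 0, 0)]
    seen, out = {(0, 0)}, []
    while heap and len(out) < k:
        cost, i, j = heapq.heappop(heap)
        out.append((cost, i, j))
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(items_a) and nj < len(items_b) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(
                    heap, (combine_cost(items_a[ni], items_b[nj]), ni, nj))
    return out

print(cube_top_k([0.1, 0.4, 0.9], [0.2, 0.3], lambda a, b: a + b, 3))

With a purely monotonic cost, as here, the enumeration is exact; a non-monotonic term such as the language model score or the target-side MaxEnt RS features makes it approximate, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},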
{
"text": "For translating a source sentence F J I , the decoder adopts a bottom-up strategy. All derivations are stored in a chart structure. Each cell c[i, j] of the chart contains all partial derivations which correspond to the source phrase f j i . For translating a source-side span [i, j], we first select all possible rules from the rule table. Meanwhile, we can obtain features of the MaxEnt RS models which are defined on the source-side since they are fixed before decoding. During decoding, for a source phrase f j i , suppose the rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 f k i X 1 f j t , e k i X 1 e j t",
"eq_num": "(10)"
}
],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "is selected by the decoder, where i \u2264 k < t \u2264 j and k + 1 < t, then we can gather features which are defined on the target-side of the subphrase X 1 from the ancestor chart cell c[k + 1, t \u2212 1] since the span [k + 1, t \u2212 1] has already been covered. Then the new feature scores P rs and P rsn can be computed. Therefore, the cost of the derivation can be obtained. Finally, the decoding is completed when the whole sentence is covered, and the best derivation of the source sentence F J I is the item with the lowest cost in cell c [I, J] .",
"cite_spans": [
{
"start": 532,
"end": 538,
"text": "[I, J]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating the MaxEnt RS Models into the SMT Model",
"sec_num": "3.2"
},
{
"text": "We carry out experiments on two translation tasks with different sizes and domains of the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "\u2022 IWSLT-05: We use about 40,000 sentence pairs from the BTEC corpus with 354k Chinese words and 378k English words as our training data. The English part is used to train a trigram language model. We use IWSLT-04 test set as the development set and IWSLT-05 test set as the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "\u2022 NIST-03: We use the FBIS corpus as the training corpus, which contains 239k sentence pairs with 6.9M Chinese words and 8.9M English words. For this task, we train two trigram language models on the English part of the training corpus and the Xinhua portion of the Gigaword corpus, respectively. NIST-02 test set is used as the development set and NIST-03 test set is used as the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "To train the translation model, we first run GIZA++ (Och and Ney, 2000) to obtain word alignment in both translation directions. Then the word alignment is refined by performing \"growdiag-final\" method (Koehn et al., 2003) . We use the same method suggested in (Chiang, 2005) to extract SCFG grammar rules. Meanwhile, we gather context features for training the MaxEnt RS models. The maximum initial phrase length is set to 10 and the maximum rule length of the sourceside is set to 5. We use SRI Language Modeling Toolkit (Stolcke, 2002) to train language models for both tasks. We use minimum error rate training to tune the feature weights for the log-linear model.",
"cite_spans": [
{
"start": 52,
"end": 71,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 202,
"end": 222,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF9"
},
{
"start": 261,
"end": 275,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
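{
"text": "As an illustration of the alignment refinement step, here is a compact sketch of one common grow-diag-final variant (our reading of Koehn et al. (2003), not code from this paper); e2f and f2e are the sets of (source, target) link pairs from the two GIZA++ runs:

def grow_diag_final(e2f, f2e):
    inter, union = e2f & f2e, e2f | f2e
    align = set(inter)
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
    grown = True
    while grown:  # grow-diag: add union links adjacent to current links
        grown = False
        for s, t in sorted(align):
            for ds, dt in neighbors:
                c = (s + ds, t + dt)
                if c in union and c not in align and \
                   (all(p != c[0] for p, _ in align) or
                        all(q != c[1] for _, q in align)):
                    align.add(c)
                    grown = True
    for s, t in sorted(union):  # final: cover still-unaligned words
        if all(p != s for p, _ in align) or all(q != t for _, q in align):
            align.add((s, t))
    return align",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},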
{
"text": "The translation quality is evaluated by BLEU metric (Papineni et al., 2002) , as calculated by mteval-v11b.pl with case-insensitive matching of n-grams, where n = 4.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
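{
"text": "The paper scores with mteval-v11b.pl; purely as an illustration of BLEU-4 n-gram matching (NLTK's tokenization and smoothing differ from mteval, so the scores are not directly comparable), one could compute:

from nltk.translate.bleu_score import corpus_bleu

refs = [[['what', 'is', 'the', 'name', 'of', 'this', 'street', '?'],
         ['what', 'is', 'this', 'street', 'called', '?']]]
hyps = [['what', 'is', 'this', 'street', 'called', '?']]
# 1.0 here, since the hypothesis exactly matches the second reference.
print(corpus_bleu(refs, hyps, weights=(0.25, 0.25, 0.25, 0.25)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},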
{
"text": "We reimplement the decoder of Hiero (Chiang, 2007) in C++, which is the state-of-the-art SMT Table 3 : BLEU-4 scores (case-insensitive) on IWSLT-05 task and NIST MT-03 task. SLex = Source-side Lexical Features, PF = POS Features, SLen = Source-side Length Feature, TF = Target-side features.",
"cite_spans": [
{
"start": 36,
"end": 50,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "system. During decoding, we set b = 100 to prune grammar rules, \u03b2 = 10, b = 30 to prune X cells, and \u03b2 = 10, b = 15 to prune S cells. For cube pruning, we set the threshold = 1.0. See (Chiang, 2007) for meanings of these pruning parameters.",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "The baseline system uses precomputed phrase translation probabilities and two trigram language models to perform rule selection, independent of any other context information. The results are shown in the row Baseline of Table 3 . For IWSLT-05 task, the baseline system achieves a BLEU-4 score of 56.20. For NIST MT-03 task, the BLEU-4 score is 28.05 .",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 227,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "As described in Section 3.2, we add two new features to integrate the MaxEnt RS models into the hierarchical model. To run the decoder, we share the same pruning settings with the baseline system. Table 3 shows the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline + MaxEnt RS",
"sec_num": "4.4"
},
{
"text": "Using all features defined in Section 3.1 to train the MaxEnt RS models, for IWSLT-05 task, the BLEU-4 score is 57.20, which achieves an absolute improvement of 1.0 over the baseline. For NIST-03 task, our system obtains a BLEU-4 score of 29.02, with an absolute improvement of 0.97 over the baseline. Using Zhang's significance tester (Zhang et al., 2004) to perform paired bootstrap sampling (Koehn, 2004b) , both improvements on the two tasks are statistically significant at p < 0.05.",
"cite_spans": [
{
"start": 336,
"end": 356,
"text": "(Zhang et al., 2004)",
"ref_id": "BIBREF20"
},
{
"start": 394,
"end": 408,
"text": "(Koehn, 2004b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline + MaxEnt RS",
"sec_num": "4.4"
},
{
"text": "In order to explore the utility of the context features, we train the MaxEnt RS models on different feature sets. We find that POS features are the most useful features since they can generalize over all training examples. Moreover, length feature also yields improvement. However, these features are never used in the baseline. Table 4 shows the number of source-sides of the SCFG rules for NIST-03 task. After extracting grammar rules from the training corpus, there are 163,097 source-sides match the test corpus, 91.15% are hierarchical LHS's (H-LHS, the LHS which contains nonterminals). For the hierarchical LHS's, 64.18% are ambiguous (AH-LHS, the H-LHS which has multiple translations). This indicates that the decoder will face serious rule selection problem during decoding. We also note the number of the source-sides of the best translation for the test corpus. For the baseline system, the number of H-LHS only account for 59.36% of total LHS's. However, by incorporating MaxEnt RS models, that proportion increases to 81.44%, since the number of AH-LHS increases. The reason is that, we use the feature P rsn to reward ambiguous hierarchical LHS's. This has some advantages. On one hand, H-LHS can capture phrase reorderings. On the other hand, AH-LHS is more reliable than non-ambiguous LHS, since most non-ambiguous LHS's occur only once in the training corpus. In order to know how the MaxEnt RS models improve the performance of the SMT system, we study the best translation of Baseline and Base-line+MaxEnt RS. We find that the MaxEnt RS models improve translation quality in 2 ways.",
"cite_spans": [],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline + MaxEnt RS",
"sec_num": "4.4"
},
{
"text": "Since the SCFG rules which contain nonterminals can capture reordering of phrases, better rule selection will produce better phrase reordering. For example, the source sentence \".. The source sentence is translated incorrectly by the baseline system, which selects the rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Phrase reordering",
"sec_num": "5.1"
},
{
"text": "(11) X \u2192 X 1 X 2 , the X 1 X 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Phrase reordering",
"sec_num": "5.1"
},
{
"text": "and produces a monotone translation. In contrast, by considering information of the subphrases X 1 and X 2 , the MaxEnt RS model chooses the rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Phrase reordering",
"sec_num": "5.1"
},
{
"text": "(12) X \u2192 X 1 X 2 , X 2 of X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Phrase reordering",
"sec_num": "5.1"
},
{
"text": "and obtains a correct translation by swapping X 1 and X 2 on the target-side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Phrase reordering",
"sec_num": "5.1"
},
{
"text": "The MaxEnt RS models can also help the decoder perform better lexical translation than the baseline. This is because the SCFG rules contain terminals. When the decoder selects a rule for a source-side, it also determines the translations of the source terminals. For example, the translations of the source sentence \" \" are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 Reference I'm afraid this flight is full.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 Baseline: I'm afraid already booked for this flight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 +MaxEnt RS: I'm afraid this flight is full.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "Here, the baseline translates the Chinese phrase \" \" into \"booked\" by using the rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "(13) X \u2192 X 1 , X 1 booked",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "The meaning is not fully expressed since the Chinese word \" \" is not translated. However, the MaxEnt RS model obtains a correct translation by using the rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "(14) X \u2192 X 1 , X 1 full",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "However, we also find that some results produced by the MaxEnt RS models seem to decrease the BLEU score. An interesting example is the translation of the source sentence \" \":",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 Reference1: What is the name of this street?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 Reference2: What is this street called?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 Baseline: What is the name of this street?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "\u2022 +MaxEnt RS: What's this street called?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "In fact, both translations are correct. But the translation of the baseline fully matches Reference1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
{
"text": "Although the translation produced by the MaxEnt RS model is almost the same as Reference2, as the BLEU metric is based on n-gram matching, the translation \"What's\" cannot match \"What is\" in Reference2. Therefore, the MaxEnt RS model achieves a lower BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},
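{
"text": "A tiny sketch of the mismatch: the contracted token differs at n = 1, and every higher-order n-gram containing it then fails to match as well.

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

ref = ['what', 'is', 'this', 'street', 'called', '?']   # Reference2 wording
hyp = ['what\'s', 'this', 'street', 'called', '?']      # contracted variant
for n in range(1, 5):
    print(n, len(ngrams(hyp, n) & ngrams(ref, n)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Lexical Translation",
"sec_num": "5.2"
},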
{
"text": "In this paper, we propose a generic lexicalized approach for rule selection. We build maximum entropy based rule selection models for each ambiguous hierarchical source-side of translation rules. The MaxEnt RS models combine rich context information, which can help the decoder perform context-dependent rule selection during decoding. We integrate the MaxEnt RS models into the hierarchical SMT model by adding two new features. Experiments show that the lexicalized approach for rule selection achieves statistically significant improvements over the state-of-the-art syntax-based SMT system. Furthermore, our approach not only can be used for the formally syntax-based SMT systems, but also can be applied to the linguistically syntaxbased SMT systems. For future work, we will explore more sophisticated features for the MaxEnt RS models and integrate the models into the linguistically syntax-based SMT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, we use Chinese and English as the source and target language, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to show our special thanks to Hwee Tou Ng, Liang Huang, Yajuan Lv and Yang Liu for their valuable suggestions. We also appreciate the anonymous reviewers for their detailed comments and recommendations. This work was supported by the National Natural Science Foundation of China (NO. 60573188 and 60736014), and the High Technology Research and Development Program of China (NO. 2006AA010108).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berger, A. L., S. A. Della Pietra, and V. J. Della. 1996. A maximum entropy approach to natural lan- guage processing. Computational Linguistics, page 22(1):39 72.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "How phrase sense disambiguation outperforms word sense disambiguation for statistical machine translation",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "11th Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "43--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carpuat, Marine and Dekai Wu. 2007a. How phrase sense disambiguation outperforms word sense dis- ambiguation for statistical machine translation. In 11th Conference on Theoretical and Methodological Issues in Machine Translation, pages 43-52.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improving statistical machine translation using word sense disambiguation",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carpuat, Marine and Dekai Wu. 2007b. Improving sta- tistical machine translation using word sense disam- biguation. In Proceedings of EMNLP-CoNLL 2007, pages 61-72.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word sense disambiguation improves statistical machine translation",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Seng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chan, Yee Seng, Hwee Tou Ng, and David Chiang. 2007. Word sense disambiguation improves sta- tistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Compu- tational Linguistics, pages 33-40.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, David. 2005. A hierarchical phrase-based model for statistical machine translation. In Pro- ceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics, pages 263-270.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, David. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, pages 33(2):201- 228.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL 2006",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galley, Michel, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Pro- ceedings of COLING-ACL 2006, pages 961-968.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Better kbest parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 9th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang and David Chiang. 2005. Better k- best parsing. In Proceedings of the 9th International Workshop on Parsing Technologies.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the 7th Bi- ennial Conference of the Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Philipp",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127-133.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Sixth Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp. 2004a. Pharaoh: a beam search de- coder for phrase-based statistical machine translation models. In Proceedings of the Sixth Conference of the Association for Machine Translation in the Amer- icas, pages 115-124.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp. 2004b. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 388-395.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Treeto-string alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "609--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Yang, Qun Liu, and Shouxun Lin. 2006. Tree- to-string alignment template for statistical machine translation. In Proceedings of the 44th Annual Meet- ing of the Association for Computational Linguistics, pages 609-616.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Josef and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Compu- tational Linguistics, pages 440-447.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Josef and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for sta- tistical machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 295-302.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Josef",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Josef. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 160-167.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, K., S. Roukos, T. Ward, and W.-J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meet- ing of the Association for Computational Linguistics, pages 311-318.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken language Processing",
"volume": "2",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas. 2002. Srilm -an extensible lan- guage modeling toolkit. In Proceedings of the Inter- national Conference on Spoken language Process- ing, volume 2, pages 901-904.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maximum entropy based phrase reordering model for statistical machine translation",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "521--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiong, Deyi, Qun Liu, and Shouxun Lin. 2006. Maxi- mum entropy based phrase reordering model for sta- tistical machine translation. In Proceedings of the 44th Annual Meeting of the Association for Compu- tational Linguistics, pages 521-528.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discriminative reordering models for statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zens, Richard and Hermann Ney. 2006. Discrimina- tive reordering models for statistical machine trans- lation. In Proceedings of the Workshop on Statistical Machine Translation, pages 55-63.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Interpreting bleu/nist scores: How much improvement do we need to have a better system?",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2051--2054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Ying, Stephan Vogel, and Alex Waibel. 2004. Interpreting bleu/nist scores: How much improve- ment do we need to have a better system? In Pro- ceedings of the Fourth International Conference on Language Resources and Evaluation, pages 2051- 2054.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Maximum entropy modeling toolkit for python and c++",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Le. 2004. Maximum entropy model- ing toolkit for python and c++. available at http://homepages.inf.ed.ac.uk/s0450736/maxent too- lkit.html.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Syntactic structures of the same source-side in different rules.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "An training example for rule extraction.",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Feature categories of the MaxEnt RS model.",
"num": null,
"content": "<table><tr><td>Type</td><td>Feature</td></tr><tr><td>W\u03b1 \u22121</td><td/></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: Number of possible source-sides of SCFG</td></tr><tr><td>rules for NIST-03 task and number of source-sides</td></tr><tr><td>of the best translation. H-LHS = Hierarchical</td></tr><tr><td>LHS, AH-LHS = Ambiguous hierarchical LHS.</td></tr></table>",
"html": null
}
}
}
}