{
"paper_id": "N09-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:46.515886Z"
},
"title": "Preference Grammars: Softening Syntactic Constraints to Improve Statistical Machine Translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a novel probabilistic synchoronous context-free grammar formalism for statistical machine translation, in which syntactic nonterminal labels are represented as \"soft\" preferences rather than as \"hard\" matching constraints. This formalism allows us to efficiently score unlabeled synchronous derivations without forgoing traditional syntactic constraints. Using this score as a feature in a log-linear model, we are able to approximate the selection of the most likely unlabeled derivation. This helps reduce fragmentation of probability across differently labeled derivations of the same translation. It also allows the importance of syntactic preferences to be learned alongside other features (e.g., the language model) and for particular labeling procedures. We show improvements in translation quality on small and medium sized Chinese-to-English translation tasks.",
"pdf_parse": {
"paper_id": "N09-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a novel probabilistic synchoronous context-free grammar formalism for statistical machine translation, in which syntactic nonterminal labels are represented as \"soft\" preferences rather than as \"hard\" matching constraints. This formalism allows us to efficiently score unlabeled synchronous derivations without forgoing traditional syntactic constraints. Using this score as a feature in a log-linear model, we are able to approximate the selection of the most likely unlabeled derivation. This helps reduce fragmentation of probability across differently labeled derivations of the same translation. It also allows the importance of syntactic preferences to be learned alongside other features (e.g., the language model) and for particular labeling procedures. We show improvements in translation quality on small and medium sized Chinese-to-English translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Probabilistic synchronous context-free grammars (PSCFGs) define weighted production rules that are automatically learned from parallel training data. As in classical CFGs, these rules make use of nonterminal symbols to generalize beyond lexical modeling of sentences. In MT, this permits translation and reordering to be conditioned on more abstract notions of context. For example, VP \u2192 ne VB 1 pas # do not VB 1 represents the discontiguous translation of the French words \"ne\" and \"pas\" to \"do not\", in the context of the labeled nonterminal symbol \"VB\" (representing syntactic category \"verb\"). Translation with PSCFGs is typically expressed as the problem of finding the maximum-weighted derivation consistent with the source sentence, where the scores are defined (at least in part) by R-valued weights associated with the rules. A PSCFG derivation is a synchronous parse tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Defining the translation function as finding the best derivation has the unfortunate side effect of forcing differently-derived versions of the same target sentence to compete with each other. In other words, the true score of each translation is \"fragmented\" across many derivations, so that each translation's most probable derivation is the only one that matters. The more Bayesian approach of finding the most probable translation (integrating out the derivations) instantiates an NP-hard inference problem even for simple word-based models (Knight, 1999) ; for grammar-based translation it is known as the consensus problem (Casacuberta and de la Higuera, 2000; Sima'an, 2002) .",
"cite_spans": [
{
"start": 545,
"end": 559,
"text": "(Knight, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 629,
"end": 666,
"text": "(Casacuberta and de la Higuera, 2000;",
"ref_id": "BIBREF1"
},
{
"start": 667,
"end": 681,
"text": "Sima'an, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With weights interpreted as probabilities, the maximum-weighted derivation is the maximum a posteriori (MAP) derivation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "e \u2190 argmax e max d p(e, d | f )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where f is the source sentence, e ranges over target sentences, and d ranges over PSCFG derivations (synchronous trees). This is often described as an approximation to the most probable translation, argmax e d p(e, d | f ). In this paper, we will describe a technique that aims to find the most probable equivalence class of unlabeled derivations, rather than a single labeled derivation, reducing the fragmentation problem. Solving this problem exactly is still an NP-hard consensus problem, but we provide approximations that build on well-known PSCFG decoding methods. Our model falls somewhere between PSCFGs that extract nonterminal symbols from parse trees and treat them as part of the derivation (Zollmann and Venugopal, 2006) and unlabeled hierarchical structures (Chiang, 2005) ; we treat nonterminal labels as random variables chosen at each node, with each (unlabeled) rule expressing \"preferences\" for particular nonterminal labels, learned from data.",
"cite_spans": [
{
"start": 704,
"end": 734,
"text": "(Zollmann and Venugopal, 2006)",
"ref_id": "BIBREF17"
},
{
"start": 773,
"end": 787,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. In Section 2, we summarize the use of PSCFG grammars for translation. We describe our model (Section 3). Section 4 explains the preference-related calculations, and Section 5 addresses decoding. Experimental results using preference grammars in a loglinear translation model are presented for two standard Chinese-to-English tasks in Section 6. We review related work (Section 7) and conclude.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Probabilistic synchronous context-free grammars (PSCFGs) are defined by a source terminal set (source vocabulary) T S , a target terminal set (target vocabulary) T T , a shared nonterminal set N and a set R of rules of the form: X \u2192 \u03b3, \u03b1, w where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "\u2022 X \u2208 N is a labeled nonterminal referred to as the left-hand-side of the rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "\u2022 \u03b3 \u2208 (N \u222a T S ) * is the source side of the rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "\u2022 \u03b1 \u2208 (N \u222a T T ) * is the target side of the rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "\u2022 w \u2208 [0, \u221e) is a nonnegative real-valued weight assigned to the rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "For visual clarity, we will use the # character to separate the source side of the rule \u03b3 from the target side \u03b1. PSCFG rules also have an implied one-toone mapping between nonterminal symbols in \u03b3 and nonterminals symbols in \u03b1. Chiang (2005) , Zollmann and Venugopal (2006) and Galley et al. (2006) all use parameterizations of this PSCFG formalism 1 . Given a source sentence f and a PSCFG G, the translation task can be expressed similarly to monolingual parsing with a PCFG. We aim to find the most likely derivation d of the input source sentence and read off the English translation, identified by composing \u03b1 from each rule used in the derivation. This search for the most likely translation under the MAP approximation can be defined as:",
"cite_spans": [
{
"start": 229,
"end": 242,
"text": "Chiang (2005)",
"ref_id": "BIBREF2"
},
{
"start": 258,
"end": 274,
"text": "Venugopal (2006)",
"ref_id": "BIBREF17"
},
{
"start": 279,
"end": 299,
"text": "Galley et al. (2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = tgt argmax d\u2208D(G):src(d)=f p(d)",
"eq_num": "(1)"
}
],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "where tgt(d) is the target-side yield of a derivation d, and D(G) is the set of G's derivations. Using an n-gram language model to score derivations and rule labels to constraint the rules that form derivations, we define p(d) as log-linear model in terms of the rules r \u2208 R used in d as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "p(d) = p LM (tgt(d)) \u03bb 0 \u00d7 m i=1 p i (d) \u03bb i \u00d7p syn (d) \u03bb m+1 /Z( \u03bb) p i (d) = r\u2208R h i (r) freq(r;d) (2) p syn (d) = 1 if d respects label constraints 0 otherwise (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "\u03bb = \u03bb 0 \u2022 \u2022 \u2022 \u03bb m+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "are weights that reflect the relative importance of features in the model. The features include the n-gram language model (LM) score of the target yield sequence, a collection of m rule feature functions h i : R \u2192 R \u22650 , and a \"syntax\" feature that (redundantly) requires every nonterminal token to be expanded by a rule with that nonterminal on its left-hand side. freq(r; d) denotes the frequency of the rule r in the derivation d. Note that \u03bb m+1 can be effectively ignored when p syn is defined as in Equation 3. Z( \u03bb) is a normalization constant that does not need to be computed during search under the argmax search criterion in Equation 1. Feature weights \u03bb are trained discriminatively in concert with the language model weight to maximize the BLEU (Papineni et al., 2002) automatic evaluation metric via Minimum Error Rate Training (MERT) (Och, 2003) .",
"cite_spans": [
{
"start": 758,
"end": 781,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 849,
"end": 860,
"text": "(Och, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
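{
"text": "The following is a minimal, illustrative Python sketch (not the system's implementation) of the unnormalized derivation score in Equation 2, with the p syn factor omitted; the Rule container, feature values, and LM probability below are hypothetical.\n\nimport math\nfrom collections import Counter\n\nclass Rule:\n    # Toy PSCFG rule: left-hand side, source side, target side, and feature values h_i(r).\n    def __init__(self, lhs, src, tgt, features):\n        self.lhs, self.src, self.tgt, self.features = lhs, src, tgt, features\n\ndef derivation_score(rules, p_lm, lambdas):\n    # Unnormalized p(d) from Equation 2; Z(lambda) and the p_syn factor are not computed here.\n    lm_weight, feat_weights = lambdas[0], lambdas[1:]\n    log_score = lm_weight * math.log(p_lm)\n    for r, freq in Counter(rules).items():      # freq(r; d)\n        for h, lam in zip(r.features, feat_weights):\n            log_score += lam * freq * math.log(h)\n    return math.exp(log_score)\n\n# Hypothetical usage with two rules, made-up feature values, and a made-up LM probability.\nr1 = Rule('X', ['ne', 'X1', 'pas'], ['do', 'not', 'X1'], [0.5, 0.2])\nr2 = Rule('X', ['manger'], ['eat'], [0.8, 0.6])\nprint(derivation_score([r1, r2], p_lm=1e-4, lambdas=[1.0, 0.5, 0.5]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},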
{
"text": "We use the open-source PSCFG rule extraction framework and decoder from Zollmann et al. (2008) as the framework for our experiments. The asymptotic runtime of this decoder is:",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "Zollmann et al. (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "O |f | 3 |N ||T T | 2(n\u22121) K (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "where K is the maximum number of nonterminal symbols per rule, |f | the source sentence length, and n is the order of the n-gram LM that is used to compute p LM . This constant factor in Equation 4 arises from the dynamic programming item structure used to perform search under this model. Using notation from Chiang (2007) , the corresponding item structure is:",
"cite_spans": [
{
"start": 310,
"end": 323,
"text": "Chiang (2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "[X, i, j, q(\u03b1)] : w (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "where X is the nonterminal label of a derivation, i, j define a span in the source sentence, and q(\u03b1) maintains state required to compute p LM (\u03b1). Under the MAP criterion we can discard derivations of lower weight that share this item structure, but in practice we often require additional lossy pruning to limit the number of items produced. The Syntax-Augmented MT model of Zollmann and Venugopal (2006) , for instance, produces a very large nonterminal set using \"slash\" (NP/NN \u2192 the great) and \"plus\" labels (NP+VB \u2192 she went) to assign syntactically motivated labels for rules whose target words do not correspond to constituents in phrase structure parse trees. These labels lead to fragmentation of probability across many derivations for the same target sentence, worsening the impact of the MAP approximation. In this work we address the increased fragmentation resulting from rules with labeled nonterminals compared to unlabeled rules (Chiang, 2005) .",
"cite_spans": [
{
"start": 377,
"end": 406,
"text": "Zollmann and Venugopal (2006)",
"ref_id": "BIBREF17"
},
{
"start": 947,
"end": 961,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSCFGs for Machine Translation",
"sec_num": "2"
},
{
"text": "We extend the PSCFG formalism to include soft \"label preferences\" for unlabeled rules that correspond to alternative labelings that have been encountered in training data for the unlabeled rule form. These preferences, estimated via relative frequency counts from rule occurrence data, are used to estimate the feature p syn (d), the probability that an unlabeled derivation can be generated under traditional syntactic constraints. In classic PSCFG, p syn (d) enforces a hard syntactic constraint (Equation 3). In our approach, label preferences influence the value of p syn (d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Grammars",
"sec_num": "3"
},
{
"text": "Consider the following labeled Chinese-to-English PSCFG rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "(4) S \u2192 ( \u00ea \u00fd VB 1 # a place where I can VB 1 (3) S \u2192 ( \u00ea \u00fd VP 1 # a place where I can VP 1 (2) SBAR \u2192 ( \u00ea \u00fd VP 1 # a place where I can VP 1 (1) FRAG \u2192 ( \u00ea \u00fd AUX 1 # a place where I can AUX 1 (8) VB \u2192 m # eat (1) VP \u2192 m # eat (1) NP \u2192 m # eat (10) NN \u2192 m # dish",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "where the numbers are frequencies of the rule from the training corpus. In classical PSCFG we can think of the nonterminals mentioned in the rules as hard constraints on which rules can be used to expand a particular node; e.g., a VP can only be expanded by a VP rule. In Equation 2, p syn (d) explicitly enforces this hard constraint. Instead, we propose softening these constraints. In the rules below, labels are represented as soft preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "(10) X \u2192 ( \u00ea \u00fd X 1 # a place where I can X 1 \uf8f1 \uf8f2 \uf8f3 p(H 0 = S, H 1 = VB | r) = 0.4 p(H 0 = S, H 1 = VP | r) = 0.3 p(H 0 = SBAR, H 1 = VP | r) = 0.2 p(H 0 = FRAG, H 1 = AUX | r) = 0.1 \uf8fc \uf8fd \uf8fe (10) X \u2192 m # eat p(H 0 = VB | r) = 0.8 p(H 0 = VP | r) = 0.1 p(H 0 = NP | r) = 0.1 (10) X \u2192 m # dish { p(H 0 = NN | r) = 1.0 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "Each unlabeled form of the rule has an associated distribution over labels for the nonterminals referenced in the rule; the labels are random variables H i , with H 0 the left-hand-side label. These unlabeled rule forms are simply packed representations of the original labeled PSCFG rules. In addition to the usual features h i (r) for each rule, estimated based on unlabeled rule frequencies, we now have label preference distributions. These are estimated as relative frequencies from the labelings of the base, unlabeled rule. Our primary contribution is how we compute p syn (d)-the probability that an unlabeled derivation adheres to traditional syntactic constraints-for derivations built from preference grammar rules. By using p syn (d) as a feature in the log-linear model, we allow the MERT framework to evaluate the importance of syntactic structure relative to other features. The example rules above highlight the potential for p syn (d) to affect the choice of translation. The translation of the Chinese word sequence ( \u00ea \u00fd m can be performed by expanding the nonterminal in the rule \"a place where I can X 1 \" with either \"eat\" or \"dish.\" A hierarchical system (Chiang, 2005) would allow either expansion, relying on features like p LM to select the best translation since both expansions occurred the same number of times in the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
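{
"text": "As a minimal sketch (ours, not the authors' code) of the relative-frequency estimation described above, the label preference distribution of each unlabeled rule form can be computed from counts of its observed labelings; the rule keys below are stand-ins for the example rules.\n\ndef estimate_preferences(labeled_rule_counts):\n    # labeled_rule_counts maps an unlabeled rule form to counts of its observed label vectors h.\n    # Returns p_pref(h | r) as relative frequencies.\n    prefs = {}\n    for rule, label_counts in labeled_rule_counts.items():\n        total = sum(label_counts.values())\n        prefs[rule] = {h: count / total for h, count in label_counts.items()}\n    return prefs\n\ncounts = {\n    'X -> ... # eat': {('VB',): 8, ('VP',): 1, ('NP',): 1},\n    'X -> ... # dish': {('NN',): 10},\n}\nprint(estimate_preferences(counts)['X -> ... # eat'])\n# {('VB',): 0.8, ('VP',): 0.1, ('NP',): 0.1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},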
{
"text": "A richly-labeled PSCFG as in Zollmann and Venugopal (2006) would immediately reject the rule generating \"dish\" due to hard label matching constraints, but would produce three identical, competing derivations. Two of these derivations would produce S as a root symbol, while one derivation would produce SBAR. The two S-labeled derivations compete, rather than reinforce the choice of the word \"eat,\" which they both make. They will also compete for consideration by any decoder that prunes derivations to keep runtime down.",
"cite_spans": [
{
"start": 29,
"end": 58,
"text": "Zollmann and Venugopal (2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "The rule preferences indicate that VB and VP are both valid labels for the rule translating to \"eat\", and both of these labels are compatible with the arguments expected by \"a place where I can X 1 \". Alternatively, \"dish\" produces a NN label which is not compatible with the arguments of this higherup rule. We design p syn (d) to reflect compatibility between two rules (one expanding a right-hand side nonterminal in the other), based on label preference distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating example",
"sec_num": "3.1"
},
{
"text": "Probabilistic synchronous context-free preference grammars are defined as PSCFGs with the following additional elements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "\u2022 H: a set of implicit labels, not to be confused with the explicit label set N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "\u2022 \u03c0: H \u2192 N , a function that associates each implicit label with a single explicit label. We can therefore think of H symbols as refinements of the nonterminals in N (Matsusaki et al., 2005) .",
"cite_spans": [
{
"start": 166,
"end": 190,
"text": "(Matsusaki et al., 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "\u2022 For each rule r, we define a probability distribution over vectors h of implicit label bindings for its nonterminals, denoted p pref ( h | r). h includes bindings for the left-hand side nonterminal (h 0 ) as well as each right-hand side non-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "terminal (h 1 , ..., h | h| ). Each h i \u2208 H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "When N , H are defined to include just a single generic symbol as in (Chiang, 2005) , we produce the unlabeled grammar discussed above. In this work, we define",
"cite_spans": [
{
"start": 69,
"end": 83,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "\u2022 N = {S, X} \u2022 H = {NP, DT, NN \u2022 \u2022 \u2022 } = N SAMT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "where N corresponds to the generic labels of Chiang (2005) and H corresponds to the syntactically motivated SAMT labels from (Zollmann and Venugopal, 2006) , and \u03c0 maps all elements of H to X. We will use hargs(r) to denote the set of all h = h 0 , h 1 , ..., h k \u2208 H k+1 that are valid bindings for the rule with nonzero preference probability.",
"cite_spans": [
{
"start": 45,
"end": 58,
"text": "Chiang (2005)",
"ref_id": "BIBREF2"
},
{
"start": 125,
"end": 155,
"text": "(Zollmann and Venugopal, 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "The preference distributions p pref from each rule used in d are used to compute p syn (d) as described next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "4 Computing feature p syn (d) Let us view a derivation d as a collection of nonterminal tokens n j , j \u2208 {1, ..., |d|}. Each n j takes an explicit label in N . The score p syn (d) is a product, with one factor per n j in the derivation d:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p syn (d) = |d| j=1 \u03c6 j",
"eq_num": "(6)"
}
],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "Each \u03c6 j factor considers the two rules that n j participates in. We will refer to the rule above nonterminal token n j as r j (the nonterminal is a child in this rule) and the rule that expands nonterminal token j as r j . The intuition is that derivations in which these two rules agree (at each j) about the implicit label for n j , in H are preferable to derivations in which they do not. Rather than making a decision about the implicit label, we want to reward p syn when r j and r j are consistent. Our way of measuring this consistency is an inner product of preference distributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 j \u221d h\u2208H p pref (h | r j )p pref (h | r j )",
"eq_num": "(7)"
}
],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "This is not quite the whole story, because",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "p pref (\u2022 | r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "is defined as a joint distribution of all the implicit labels within a rule; the implicit labels are not independent of each other. Indeed, we want the implicit labels within each rule to be mutually consistent, i.e., to correspond to one of the rule's preferred labelings, for both hargs(r) and hargs(r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "Our approach to calculating p syn within the dynamic programming algorithm is to recursively calculate preferences for each chart item based on (a) the smaller items used to construct the item and (b) the rule that permits combination of the smaller items into the larger one. We describe how the preferences for chart items are calculated. Let a chart item be denoted [X, i, j, u, ...] where X \u2208 N and i and j are positions in the source sentence, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "u : {h \u2208 H | \u03c0(h) = X} \u2192 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "(where h u(h) = 1) denotes a distribution over possible X-refinement labels. We will refer to it below as the left-hand-side preference distribution. Additional information (such as language model state) may also be included; it is not relevant here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "The simplest case is for a nonterminal token n j that has no nonterminal children. Here the left-handside preference distribution is simply given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "u(h) = p pref (h | r j ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "and we define the p syn factor to be \u03c6 j = 1. Now consider the dynamic programming step of combining an already-built item [X, i, j, u, ...] rooted by explicit nonterminal X, spanning source sentence positions i to j, with left-hand-side preference distribution u, to build a larger item rooted by Y through a rule r = Y \u2192 \u03b3X 1 \u03b3 , \u03b1X 1 \u03b1 , w with preferences p pref (\u2022 | r). ",
"cite_spans": [
{
"start": 123,
"end": 140,
"text": "[X, i, j, u, ...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "v(h) =\u1e7d (h) h \u1e7d(h ) where (8) v(h) = h \u2208H: h,h \u2208hargs(r) p pref ( h, h | r) \u00d7 u(h )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "Renormalizing keeps the preference vectors on the same scale as those in the rules. The p syn factor \u03c6, which is factored into the value of the new item, is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 = h \u2208H: h,h \u2208hargs(r) u(h )",
"eq_num": "(9)"
}
],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "so that the value considered for the new item is w \u00d7 \u03c6 \u00d7 ..., where factors relating to p LM , for example, may also be included. Coming back to our example, if we let r be the leaf rule producing \"eat\" at shared nonterminal n 1 , we generate an item with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "u = u(VB) = 0.8, u(VP) = 0.1, u(NP) = 0.1 \u03c6 1 = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
{
"text": "Combining this item with X \u2192 ( \u00ea \u00fd X 1 # a place where I can X 1 as r 2 at nonterminal n 2 generates a new target item with translation \"a place where I can eat\", \u03c6 2 = 0.9 and v as calculated in Fig. 1 . In contrast, \u03c6 2 = 0 for the derivation where r is the leaf rule that produces \"dish\". This calculation can be seen as a kind of singlepass, bottom-up message passing inference method embedded within the usual dynamic programming search.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 202,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},
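{
"text": "A minimal Python sketch (our illustration, not the decoder) of the single-nonterminal case of Equations 8 and 9, using the preference values of the running example; it reproduces \u03c6 2 = 0.9 and v(S) = 0.35/0.37.\n\ndef combine(rule_prefs, child_u):\n    # rule_prefs: p_pref((h0, h1) | r) for the combining rule; child_u: the child item's\n    # left-hand-side preference distribution u. Returns the new item's normalized\n    # preferences v (Equation 8) and the p_syn factor phi (Equation 9).\n    v_tilde = {}\n    for (h0, h1), p in rule_prefs.items():\n        v_tilde[h0] = v_tilde.get(h0, 0.0) + p * child_u.get(h1, 0.0)\n    z = sum(v_tilde.values())\n    v = {h0: val / z for h0, val in v_tilde.items()} if z > 0 else {}\n    phi = sum(child_u.get(h1, 0.0) for h1 in set(h1 for (_, h1) in rule_prefs))\n    return v, phi\n\nrule_prefs = {('S', 'VB'): 0.4, ('S', 'VP'): 0.3, ('SBAR', 'VP'): 0.2, ('FRAG', 'AUX'): 0.1}\nu_eat = {'VB': 0.8, 'VP': 0.1, 'NP': 0.1}\nv, phi = combine(rule_prefs, u_eat)\nprint(phi)                 # 0.9 = u(VB) + u(VP)\nprint(v['S'], v['SBAR'])   # 0.35/0.37 and 0.02/0.37",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal definition",
"sec_num": "3.2"
},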
{
"text": "As defined above, accurately computing p syn (d) requires extending the chart item structure with u. For models that use the n-gram LM feature, the item structure would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[X, i, j, q(\u03b1), u] : w",
"eq_num": "(10)"
}
],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "Since u effectively summarizes the choice of rules in a derivation, this extension would partition the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "If there are multiple nonterminals on the right-hand side of the rule, we sum over the longer sequences in hargs(r) and include appropriate values from the additional \"child\" items' preference vectors in the product. search space further. To prevent this partitioning, we follow the approach of Venugopal et al. (2007) . We keep track of u for the best performing derivation from the set of derivations that share [X, i, j, q(\u03b1) ] in a first-pass decoding. In a second top-down pass similar to Huang and Chiang (2007) , we can recalculate p syn (d) for alternative derivations in the hypergraph; potentially correcting search errors made in the first pass.",
"cite_spans": [
{
"start": 295,
"end": 318,
"text": "Venugopal et al. (2007)",
"ref_id": "BIBREF16"
},
{
"start": 414,
"end": 428,
"text": "[X, i, j, q(\u03b1)",
"ref_id": null
},
{
"start": 494,
"end": 517,
"text": "Huang and Chiang (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "We face another significant practical challenge during decoding. In real data conditions, the size of the preference vector for a single rule can be very high, especially for rules that include multiple nonterminal symbols that are located on the left and right boundaries of \u03b3. For example, the Chineseto-English rule X \u2192 X 1 \" X 2 # X 1 's X 2 has over 24K elements in hargs(r) when learned for the medium-sized NIST task used below. In order to limit the explosive growth of nonterminals during decoding for both memory and runtime reasons, we define the following label pruning parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "\u2022 \u03b2 R : This parameter limits the size of hargs(r) to the \u03b2 R top-scoring preferences, defaulting other values to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
{
"text": "\u2022 \u03b2 L : This parameter is the same as \u03b2 R but applied only to rules with no nonterminals. The stricter of \u03b2 L and \u03b2 R is applied if both thresholds apply. \u2022 \u03b2 P : This parameter limits the number labels in item preference vectors (Equation 8) to the \u03b2 P most likely labels during decoding, defaulting other preferences to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},
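{
"text": "A minimal sketch (an assumption on our part, not the decoder's code) of the top-k pruning applied through \u03b2 R , \u03b2 L , and \u03b2 P : only the k highest-probability entries of a preference table are kept, and the rest default to zero.\n\ndef prune_top_k(prefs, k):\n    # Keep the k most probable labelings; all other preferences default to zero.\n    kept = sorted(prefs.items(), key=lambda item: item[1], reverse=True)[:k]\n    return dict(kept)\n\nhargs_r = {('S', 'VB'): 0.4, ('S', 'VP'): 0.3, ('SBAR', 'VP'): 0.2, ('FRAG', 'AUX'): 0.1}\nprint(prune_top_k(hargs_r, 2))   # e.g. beta_R = 2 keeps only the two most likely labelings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Approximations",
"sec_num": "5"
},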
{
"text": "We evaluate our preference grammar model on small (IWSLT) and medium (NIST) data Chineseto-English translation tasks (described in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},
{
"text": "IWSLT is a limited domain, limited resource task (Paul, 2006) , while NIST is a broadcast news task with wide genre and domain coverage. We use a subset of the full training data (67M words of English text) from the annual NIST MT Evaluation. Development corpora are used to train model parameters via MERT. We use a variant of MERT that prefers sparse solutions where \u03bb i = 0 for as many features as possible. At each MERT iteration, a subset of features \u03bb are assigned 0 weight and optimization is repeated. If the resulting BLEU score is not lower, these features are left at zero.",
"cite_spans": [
{
"start": 49,
"end": 61,
"text": "(Paul, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},
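{
"text": "A minimal sketch of the sparsity-preferring MERT variant described above; optimize and score are hypothetical stand-ins for the actual MERT optimizer (with some weights pinned to zero) and the dev-set BLEU computation.\n\ndef sparse_mert(init_lambdas, candidate_zero_sets, optimize, score):\n    # At each iteration a subset of feature weights is set to zero and optimization is\n    # repeated; the zeros are kept whenever the resulting BLEU score is not lower.\n    best = optimize(init_lambdas, frozen_to_zero=set())\n    best_bleu = score(best)\n    for zero_set in candidate_zero_sets:\n        trial = optimize(best, frozen_to_zero=zero_set)\n        trial_bleu = score(trial)\n        if trial_bleu >= best_bleu:\n            best, best_bleu = trial, trial_bleu\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},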
{
"text": "All systems are built on the SAMT framework described in Zollmann et al. (2008) , using a trigram LM during search and the full-order LM during a second hypergraph rescoring pass. Reordering limits are set to 10 words for all systems. Pruning parameters during decoding limit the number of derivations at each source span to 300.",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "Zollmann et al. (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},
{
"text": "The system \"Hier.\" uses a grammar with a single nonterminal label as in Chiang (2005) . The system \"Syntax\" applies the grammar from Zollmann and Venugopal (2006) that generates a large number of syntactically motivated nonterminal labels. For the NIST task, rare rules are discarded based on their frequency in the training data. Purely lexical rules (that include no terminal symbols) that occur less than 2 times, or non-lexical rules that occur less than 4 times are discarded.",
"cite_spans": [
{
"start": 72,
"end": 85,
"text": "Chiang (2005)",
"ref_id": "BIBREF2"
},
{
"start": 133,
"end": 162,
"text": "Zollmann and Venugopal (2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},
{
"text": "IWSLT task: We evaluate the preference grammar system \"Pref.\" with parameters \u03b2 R = 100, \u03b2 L = 5, \u03b2 P = 2. Results comparing systems Pref. to Hier. and Syntax are shown in Table 2 . Automatic evaluation results using the preference grammar translation model are positive. The preference grammar system shows improvements over both the Hier. and Syntax based systems on both unseen evaluation sets IWSLT 2007 and 2008. The improvements are clearest on the BLEU metric (matching the MERT training criteria). On 2007 test data, Pref. shows a 1.2-point improvement over Hier., while on the 2008 data, there is a 0.6-point improvement. For the IWSLT task, we report additional au-System Name Words in Target Text LM singleton 1-n-grams (n) Dev. Test IWSLT 632K 431K (5) IWSLT06 IWSLT07,08 NIST 67M 102M (4) MT05 MT06 Table 1 : Training data configurations used to evaluate preference grammars. The number of words in the target text and the number of singleton 1-n-grams represented in the complete model are the defining statistics that characterize the scale of each task. For each LM we also indicate the order of the n-gram model. tomatic evaluation metrics that generally rank the Pref. system higher than Hier. and Syntax. As a further confirmation, our feature selection based MERT chooses to retain \u03bb m+1 in the model. While the IWSLT results are promising, we perform a more complete evaluation on the NIST translation task. NIST task: This task generates much larger rule preference vectors than the IWSLT task simply due to the size of the training corpora. We build systems with both \u03b2 R = 100, 10 varying \u03b2 P . Varying \u03b2 P isolates the relative impact of propagating alternative nonterminal labels within the preference grammar model. \u03b2 L = 5 for all NIST systems. Parameters \u03bb are trained via MERT on the \u03b2 R = 100, \u03b2 L = 5, \u03b2 P = 2 system. BLEU scores for each preference grammar and baseline system are shown in Table 3 , along with translation times on the test corpus. We also report length penalties to show that improvements are not simply due to better tuning of output length.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 812,
"end": 819,
"text": "Table 1",
"ref_id": null
},
{
"start": 1923,
"end": 1930,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Empirical Results",
"sec_num": "6"
},
{
"text": "The preference grammar systems outperform the Hier. baseline by 0.5 points on development data, and upto 0.8 points on unseen test data. While systems with \u03b2 R = 100 take significantly longer to translate the test data than Hier., setting \u03b2 R = 10 takes approximately as long as the Syntax based system but produces better slightly better results (0.3 points).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "The improvements in translation quality with the preference grammar are encouraging, but how much of this improvement can simply be attributed to MERT finding a better local optimum for parameters \u03bb? To answer this question, we use parameters \u03bb optimized by MERT for the preference grammar system to run a purely hierarchical system, denoted Hier.(\u03bb ), which ignores the value of \u03bb m+1 during decoding. While almost half of the improvement comes from better parameters learned via MERT for the preference grammar systems, 0.5 points can be still be attributed purely to the feature p syn . In addition, MERT does not set parameter \u03bb m+1 to 0, corroborating the value of the p syn feature again. Note that Hier.(\u03bb ) achieves better scores than the Hier. system which was trained via MERT without p syn . This highlights the local nature of MERT parameter search, but also points to the possibility that training with the feature p syn produced a more diverse derivation space, resulting in better parameters \u03bb. We see a very small improvement (0.1 point) by allowing the runtime propagation of more than 1 nonterminal label in the left-hand side posterior distribution, but the improvement doesn't extend to \u03b2 P = 5. Improved integration of the feature p syn (d) into decoding might help to widen this gap. 34.6 (0.99) 32.6 (0.95) 3:00 \u03b2 P = 5 -32.5 (0.95) 3:20 Preference Grammar: \u03b2 R = 10 \u03b2 P = 1 -32.5 (0.95) 1:03 \u03b2 P = 2 -32.6 (0.95) 1:10 \u03b2 P = 5 -32.5 (0.95) 1:10 Table 3 : Translation quality and test set translation time (using 50 machines with 2 tasks per machine) measured by the BLEU metric for the NIST task. NIST 2006 is used as the development (Dev.) corpus and NIST 2007 is used as the unseen evaluation corpus (Test). Dev. scores are reported for systems that have been separately MERT trained, Pref. systems share parameters from a single MERT training. Systems are described in the text.",
"cite_spans": [],
"ref_spans": [
{
"start": 1468,
"end": 1475,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "There have been significant efforts in the both the monolingual parsing and machine translation literature to address the impact of the MAP approximation and the choice of labels in their respective models; we survey the work most closely related to our approach. May and Knight (2006) extract nbest lists containing unique translations rather than unique derivations, while Kumar and Byrne (2004) use the Minimum Bayes Risk decision rule to select the lowest risk (highest BLEU score) translation rather than derivation from an n-best list. Tromble et al. (2008) extend this work to lattice structures. All of these approaches only marginalize over alternative candidate derivations generated by a MAPdriven decoding process. More recently, work by Blunsom et al. (2007) propose a purely discriminative model whose decoding step approximates the selection of the most likely translation via beam search. Matsusaki et al. (2005) and Petrov et al. (2006) propose automatically learning annotations that add information to categories to improve monolingual parsing quality. Since the parsing task requires selecting the most non-annotated tree, the an-notations add an additional level of structure that must be marginalized during search. They demonstrate improvements in parse quality only when a variational approximation is used to select the most likely unannotated tree rather than simply stripping annotations from the MAP annotated tree. In our work, we focused on approximating the selection of the most likely unlabeled derivation during search, rather than as a post-processing operation; the methods described above might improve this approximation, at some computational expense.",
"cite_spans": [
{
"start": 264,
"end": 285,
"text": "May and Knight (2006)",
"ref_id": "BIBREF9"
},
{
"start": 375,
"end": 397,
"text": "Kumar and Byrne (2004)",
"ref_id": "BIBREF7"
},
{
"start": 542,
"end": 563,
"text": "Tromble et al. (2008)",
"ref_id": "BIBREF15"
},
{
"start": 750,
"end": 771,
"text": "Blunsom et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 905,
"end": 928,
"text": "Matsusaki et al. (2005)",
"ref_id": "BIBREF8"
},
{
"start": 933,
"end": 953,
"text": "Petrov et al. (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We have proposed a novel grammar formalism that replaces hard syntactic constraints with \"soft\" preferences. These preferences are used to compute a machine translation feature (p syn (d)) that scores unlabeled derivations, taking into account traditional syntactic constraints. Representing syntactic constraints as a feature allows MERT to train the corresponding weight for this feature relative to others in the model, allowing systems to learn the relative importance of labels for particular resource and language scenarios as well as for alternative approaches to labeling PSCFG rules. This approach takes a step toward addressing the fragmentation problems of decoding based on maximum-weighted derivations, by summing the contributions of compatible label configurations rather than forcing them to compete. We have suggested an efficient technique to approximate p syn (d) that takes advantage of a natural factoring of derivation scores. Our approach results in improvements in translation quality on small and medium resource translation tasks. In future work we plan to focus on methods to improve on the integration of the p syn (d) feature during decoding and techniques that allow us consider more of the search space through less pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "Galley et al. (2006) rules are formally defined as tree transducers but have equivalent PSCFG forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We assume for the discussion that \u03b1, \u03b1 \u2208 T * S and \u03b3, \u03b3 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We appreciate helpful comments from three anonymous reviewers. Venugopal and Zollmann were supported by a Google Research Award. Smith was supported by NSF grant IIS-0836431.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A discriminative latent variable model for statistical machine translation",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phil Blunsom, Trevor Cohn, and Miles Osborne. 2007. A discriminative latent variable model for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguis- tics (ACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Computational complexity of problems on probabilistic grammars and transducers",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "De La Higuera",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 5th International Colloquium on Grammatical Inference: Algorithms and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Casacuberta and Colin de la Higuera. 2000. Computational complexity of problems on probabilis- tic grammars and transducers. In Proc. of the 5th Inter- national Colloquium on Grammatical Inference: Al- gorithms and Applications.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Compua- tional Linguistics (ACL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hierarchical phrase based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase based transla- tion. Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scalable inferences and training of context-rich syntax translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inferences and training of context-rich syntax translation models. In Proceed- ings of the Annual Meeting of the Association for Com- puational Linguistics (ACL), Sydney, Australia.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, Squibs and Discussion.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum Bayes-risk decoding for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar and William Byrne. 2004. Min- imum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Lan- guage Technology and North American Association for Computational Linguistics Conference (HLT/NAACL), Boston,MA, May 27-June 1.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probabilistic CFG with latent annotations",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Matsusaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Matsusaki, Yusuke Miyao, and Junichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A better N-best list: Practical determinization of weighted finite tree automata",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics Conference (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan May and Kevin Knight. 2006. A better N-best list: Practical determinization of weighted finite tree automata. In Proceedings of the Human Language Technology Conference of the North American Chap- ter of the Association for Computational Linguistics Conference (HLT/NAACL).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the IWSLT 2006 evaluation campaign",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Paul. 2006. Overview of the IWSLT 2006 eval- uation campaign. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Annual Meeting of the Association for Compuational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. In Proceedings of the Annual Meeting of the Association for Compuational Linguis- tics (ACL).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Computational complexity of probabilistic disambiguation",
"authors": [
{
"first": "Khalil",
"middle": [],
"last": "Sima",
"suffix": ""
},
{
"first": "'",
"middle": [],
"last": "An",
"suffix": ""
}
],
"year": 2002,
"venue": "Grammars",
"volume": "5",
"issue": "2",
"pages": "125--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalil Sima'an. 2002. Computational complexity of probabilistic disambiguation. Grammars, 5(2):125- 151.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Lattice minimum Bayes-risk decoding for statistical machine translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice minimum Bayes-risk decod- ing for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An efficient two-pass approach to Synchronous-CFG driven statistical MT",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics Conference (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Venugopal, Andreas Zollmann, and Stephan Vo- gel. 2007. An efficient two-pass approach to Synchronous-CFG driven statistical MT. In Proceed- ings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics Conference (HLT/NAACL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Syntax augmented machine translation via chart parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Statistical Machine Translation, HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the Workshop on Statistical Machine Translation, HLT/NAACL, New York, June.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A systematic comparison of phrasebased, hierarchical and syntax-augmented statistical MT",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Ponte",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Zollmann, Ashish Venugopal, Franz J. Och, and Jay Ponte. 2008. A systematic comparison of phrase- based, hierarchical and syntax-augmented statistical MT. In Proceedings of the Conference on Computa- tional Linguistics (COLING).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "2 The new item will have signature [Y, i \u2212 |\u03b3|, j + |\u03b3 |, v, ...]. The left-handside preferences v for the new item are calculated as follows:"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "\u1e7d(S) = p pref ( h = S, h = VB | r)u(VB) + p pref ( h = S, h = VP | r)u(VP) = (0.4 \u00d7 0.8) + (0.3 \u00d7 0.1) = 0.35 v(SBAR) = p( h = SBAR, h = VP | r)u(VP) = (0.2 \u00d7 0.1) = 0.02 v = v(S) = 0.35/(\u1e7d(S) +\u1e7d(SBAR)), v(SBAR) = 0.02/\u1e7d(S) +\u1e7d(SBAR) = v(S) = 0.35/0.37, v(SBAR) = 0.02/0.37 \u03c6 2 = u(VB) + u(VP) = 0.8 + 0.1 = 0.9 Calculating v and \u03c6 2 for the running example."
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Translation quality metrics on the IWSLT translation task, with IWSLT 2006 as the development corpora, and IWSLT 2007 and 2008 as test corpora. Each metric is annotated with an \u2191 if increases in the metric value correspond to increase in translation quality and a \u2193 if the opposite is true. We also list length penalties for the BLEU metric to show that improvements are not due to length optimizations alone."
}
}
}
}