{
"paper_id": "D15-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:20.380592Z"
},
"title": "Reordering Grammar Induction",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ILLC University of Amsterdam",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ILLC University of Amsterdam",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel approach for unsupervised induction of a Reordering Grammar using a modified form of permutation trees (Zhang and Gildea, 2007), which we apply to preordering in phrase-based machine translation. Unlike previous approaches, we induce in one step both the hierarchical structure and the transduction function over it from word-aligned parallel corpora. Furthermore, our model (1) handles non-ITG reordering patterns (up to 5-ary branching), (2) is learned from all derivations by treating not only labeling but also bracketing as latent variable, (3) is entirely unlexicalized at the level of reordering rules, and (4) requires no linguistic annotation. Our model is evaluated both for accuracy in predicting target order, and for its impact on translation quality. We report significant performance gains over phrase reordering, and over two known preordering baselines for English-Japanese.",
"pdf_parse": {
"paper_id": "D15-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel approach for unsupervised induction of a Reordering Grammar using a modified form of permutation trees (Zhang and Gildea, 2007), which we apply to preordering in phrase-based machine translation. Unlike previous approaches, we induce in one step both the hierarchical structure and the transduction function over it from word-aligned parallel corpora. Furthermore, our model (1) handles non-ITG reordering patterns (up to 5-ary branching), (2) is learned from all derivations by treating not only labeling but also bracketing as latent variable, (3) is entirely unlexicalized at the level of reordering rules, and (4) requires no linguistic annotation. Our model is evaluated both for accuracy in predicting target order, and for its impact on translation quality. We report significant performance gains over phrase reordering, and over two known preordering baselines for English-Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Preordering (Collins et al., 2005) aims at permuting the words of a source sentence s into a new order \u015b, hopefully close to a plausible target word order. Preordering is often used to bridge long distance reorderings (e.g., in Japanese- or German-English), before applying phrase-based models (Koehn et al., 2007) . Preordering is often broken down into two steps: finding a suitable tree structure, and then finding a transduction function over it. A common approach is to use monolingual syntactic trees and focus on finding a transduction function of the sibling subtrees under the nodes (Lerner and Petrov, 2013; Xia and Mccord, 2004) . The (direct correspondence) assumption underlying this approach is that permuting the siblings of nodes in a source syntactic tree can produce a plausible target order. An alternative approach creates reordering rules manually and then learns the right structure for applying these rules (Katz-Brown et al., 2011) . Others attempt learning the transduction structure and the transduction function in two separate, consecutive steps (DeNero and Uszkoreit, 2011). Here we address the challenge of learning both the trees and the transduction functions jointly, in one fell swoop, from word-aligned parallel corpora.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Collins et al., 2005)",
"ref_id": "BIBREF7"
},
{
"start": 292,
"end": 312,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 590,
"end": 615,
"text": "(Lerner and Petrov, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 616,
"end": 637,
"text": "Xia and Mccord, 2004)",
"ref_id": "BIBREF39"
},
{
"start": 928,
"end": 953,
"text": "(Katz-Brown et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning both trees and transductions jointly raises two questions. How to obtain suitable trees for the source sentence and how to learn a distribution over random variables specifically aimed at reordering in a hierarchical model? In this work we solve both challenges by using the factorizations of permutations into Permutation Trees (PETs) (Zhang and Gildea, 2007) . As we explain next, PETs can be crucial for exposing the hierarchical reordering patterns found in word alignments.",
"cite_spans": [
{
"start": 345,
"end": 369,
"text": "(Zhang and Gildea, 2007)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We obtain permutations in the training data by segmenting every word-aligned source-target pair into minimal phrase pairs; the resulting alignment between minimal phrases is written as a permutation (1:1 and onto) on the source side. Every permutation can be factorized into a forest of PETs (over the source sentences) which we use as a latent treebank for training a Probabilistic Context-Free Grammar (PCFG) tailor-made for preordering as we explain next. Figure 1 shows two alternative PETs for the same permutation over minimal phrases. The nodes have labels (like P3142) which stand for local permutations (called prime permutations) over the child nodes; for example, the root label P3142 stands for the prime permutation \u27e83, 1, 4, 2\u27e9, which says that the first child of the root becomes 3rd on the target side, the second becomes 1st, the third becomes 4th and the fourth becomes 2nd. The prime permutations are non-factorizable permutations like \u27e81, 2\u27e9, \u27e82, 1\u27e9 and \u27e82, 4, 1, 3\u27e9.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 467,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We think PETs are suitable for learning preordering for two reasons. Firstly, PETs specify exactly the phrase pairs defined by the permutation. Secondly, every permutation is factorizable into prime permutations only (Albert and Atkinson, 2005) . Therefore, PETs expose maximal sharing between different permutations in terms of both phrases and their reordering. We expect this to be advantageous for learning hierarchical reordering.",
"cite_spans": [
{
"start": 217,
"end": 244,
"text": "(Albert and Atkinson, 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For learning preordering, we first extract an initial PCFG from the latent treebank of PETs over the source sentences only. We initialize the nonterminal set of this PCFG to the prime permutations decorating the PET nodes. Subsequently we split these coarse labels in the same way as latent variable splitting is learned for treebank parsing (Matsuzaki et al., 2005; Prescher, 2005; Petrov et al., 2006; Saluja et al., 2014) . Unlike treebank parsing, however, our training treebank is latent because it consists of a whole forest of PETs per training instance (s).",
"cite_spans": [
{
"start": 342,
"end": 366,
"text": "(Matsuzaki et al., 2005;",
"ref_id": "BIBREF24"
},
{
"start": 367,
"end": 382,
"text": "Prescher, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 383,
"end": 403,
"text": "Petrov et al., 2006;",
"ref_id": "BIBREF28"
},
{
"start": 404,
"end": 424,
"text": "Saluja et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning the splits on a latent treebank of PETs results in a Reordering PCFG which we use to parse input source sentences into split-decorated trees, i.e., the labels are the splits of prime permutations. After parsing s, we map the splits back on their initial prime permutations, and then retrieve a reordered version \u015b of s. In this sense, our latent splits are dedicated to reordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We face two technical difficulties alien to work on latent PCFGs in treebank parsing. Firstly, as mentioned above, permutations may factorize into more than one PET (a forest) leading to a latent training treebank. 1 And secondly, after we parse a source string s, we are interested in \u015b, the permuted version of s, not in the best derivation/PET. Exact computation is a known NP-Complete problem (Sima'an, 2002) . We solve this by a new Minimum-Bayes Risk decoding approach using Kendall reordering score as loss function, which is an efficient measure over permutations (Birch and Osborne, 2011; Isozaki et al., 2010a) .",
"cite_spans": [
{
"start": 396,
"end": 411,
"text": "(Sima'an, 2002)",
"ref_id": "BIBREF33"
},
{
"start": 571,
"end": 596,
"text": "(Birch and Osborne, 2011;",
"ref_id": "BIBREF1"
},
{
"start": 597,
"end": 619,
"text": "Isozaki et al., 2010a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, this paper contributes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A novel latent hierarchical source reordering model working over all derivations of PETs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A label splitting approach based on PCFGs over minimal phrases as terminals, learned from an ambiguous treebank, where the label splits start out from prime permutations. \u2022 A fast Minimum Bayes Risk decoding over Kendall \u03c4 reordering score for selecting \u015b. We report results for extensive experiments on English-Japanese showing that our Reordering PCFG gives substantial improvements when used as preordering for phrase-based models, outperforming two existing baselines for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We aim at learning a PCFG which we will use for parsing source sentences s into synchronous trees, from which we can obtain a reordered source version \u015b. Since PCFGs are non-synchronous grammars, we will use the nonterminal labels to encode reordering transductions, i.e., this PCFG is implicitly an SCFG. We can do this because s and \u015b are over the same alphabet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "Here, we have access only to a word-aligned parallel corpus, not a treebank. The following steps summarize our approach for acquiring a latent treebank and how it is used for learning a Reordering PCFG:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "1. Obtain a permutation over minimal phrases from every word-alignment. 2. Obtain a latent treebank of PETs by factorizing the permutations. 3. Extract a PCFG from the PETs with initial nonterminals taken from the PETs. 4. Learn to split the initial nonterminals and estimate rule probabilities. These steps are detailed in the next section, but we will start out with an intuitive exposition of PETs, the latent treebank and the Reordering Grammar. Figure 1 shows examples of what PETs look like; see (Zhang and Gildea, 2007) for algorithmic details. Here we label the nodes with nonterminals which stand for prime permutations from the operators on the PETs. For example, nonterminals P12, P21 and P3142 correspond respectively to reordering transducers \u27e81, 2\u27e9, \u27e82, 1\u27e9 and \u27e83, 1, 4, 2\u27e9. A prime permutation on a source node \u00b5 is a transduction dictating how the children of \u00b5 are reordered at the target side, e.g., P21 inverts the child order. We must stress that any similarity with ITG (Wu, 1997) is restricted to the fact that the straight and inverted operators of ITG are the binary case of prime permutations in PETs (P12 and P21). ITGs recognize only the binarizable permutations, which is a major restriction when used on the data: there are many nonbinarizable permutations in actual data (Wellington et al., 2006) . In contrast, our PETs are obtained by factorizing permutations obtained from the data, i.e., they exactly fit the range of prime permutations in the parallel corpus. In practice we limit them to maximum arity 5.",
"cite_spans": [
{
"start": 501,
"end": 525,
"text": "(Zhang and Gildea, 2007)",
"ref_id": "BIBREF40"
},
{
"start": 990,
"end": 1000,
"text": "(Wu, 1997)",
"ref_id": "BIBREF38"
},
{
"start": 1302,
"end": 1327,
"text": "(Wellington et al., 2006)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 450,
"end": 458,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "We can extract PCFG rules from the PETs, e.g., P21 \u2192 P12 P2413. However, these rules are decorated with overly coarse labels. A similar problem was encountered in non-lexicalized monolingual parsing, and one solution was to lexicalize the productions (Collins, 2003) using head words. But linguistic heads do not make sense for PETs, so we opt for the alternative approach (Matsuzaki et al., 2005) , which splits the nonterminals and softly percolates the splits through the trees, gradually fitting them to the training data. Splitting has a downside, however, because it leads to a combinatorial explosion in grammar size.",
"cite_spans": [
{
"start": 251,
"end": 266,
"text": "(Collins, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 373,
"end": 397,
"text": "(Matsuzaki et al., 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "Suppose, for example, node P21 could split into P21_1 and P21_2, and similarly P2413 splits into P2413_1 and P2413_2. This means that rule P21 \u2192 P12 P2413 will form eight new rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "P21_1 \u2192 P12_1 P2413_1; P21_1 \u2192 P12_1 P2413_2; P21_1 \u2192 P12_2 P2413_1; P21_1 \u2192 P12_2 P2413_2; P21_2 \u2192 P12_1 P2413_1; P21_2 \u2192 P12_1 P2413_2; P21_2 \u2192 P12_2 P2413_1; P21_2 \u2192 P12_2 P2413_2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "Should we want to split each nonterminal into 30 subcategories, then an n-ary rule will split into 30^(n+1) new rules, which is prohibitively large. Here we use the \"unary trick\" as in Figure 2 . The superscript on the nonterminals denotes the child position from left to right. For example P21^2_1 means that this node is a second child, and the mother nonterminal label is P21_1. For the running example rule, this gives the following rules:",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 191,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "P21_1 \u2192 P21^1_1 P21^2_1; P21_2 \u2192 P21^1_2 P21^2_2; P21^1_1 \u2192 P12_1; P21^2_1 \u2192 P2413_1; P21^1_1 \u2192 P12_2; P21^2_1 \u2192 P2413_2; P21^1_2 \u2192 P12_1; P21^2_2 \u2192 P2413_1; P21^1_2 \u2192 P12_2; P21^2_2 \u2192 P2413_2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "The unary trick leads to a substantial reduction in grammar size, e.g., for arity 5 rules and 30 splits we could have had 30^6 = 729000000 split rules, but with the unary trick we only have 30 + 30^2 * 5 = 4530 split rules. The unary trick was used in early lexicalized parsing work (Carroll and Rooth, 1998). Obtaining permutations: Given a source sentence s and its alignment a to a target sentence t in the training corpus, we segment s, a, t into a sequence of minimal phrases s_m (maximal sequence) such that the reordering between these minimal phrases constitutes a permutation \u03c0_m. We do not extract non-contiguous or non-minimal phrases because reordering them often involves complicated transductions which could hamper the performance of our learning algorithm. 3",
"cite_spans": [
{
"start": 279,
"end": 303,
"text": "(Carroll and Rooth, 1998",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "Unaligned words Next we describe the use of the factorization of permutations into PET forests for training a PCFG model. But first we need to extend the PETs to allow for unaligned words. An unaligned word is joined with a neighboring phrase to the left or the right, depending on the source language properties (e.g., whether the language is head-initial or -final (Chomsky, 1970) ). Our experiments use English as source language (head-initial), so the unaligned words are joined to phrases to their right. This modifies a PET by adding a new binary branching node \u00b5 (dominating the unaligned word and the phrase it is joined to) which is labeled with a dedicated nonterminal: P 01 if the unaligned word joins to the right and P 10 if it joins to the left.",
"cite_spans": [
{
"start": 367,
"end": 382,
"text": "(Chomsky, 1970)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PETs and the Hidden Treebank",
"sec_num": "2"
},
{
"text": "We decompose the permutation \u03c0_m into a forest of permutation trees PEF(\u03c0_m) in O(n^3), following algorithms in (Zhang et al., 2008; Zhang and Gildea, 2007) with trivial modifications. Each PET \u2206 \u2208 PEF(\u03c0_m) is a different bracketing (differing in binary branching structure only). We consider the bracketing hidden in the latent treebank, and apply unsupervised learning to induce a distribution over possible bracketings. Our probability model starts from the joint probability of a sequence of minimal phrases s_m and a permutation \u03c0_m over it. This demands summing over all PETs \u2206 in the forest PEF(\u03c0_m), and for every PET also over all its label splits, which are given by the grammar derivations d:",
"cite_spans": [
{
"start": 116,
"end": 136,
"text": "(Zhang et al., 2008;",
"ref_id": "BIBREF41"
},
{
"start": 137,
"end": 160,
"text": "Zhang and Gildea, 2007)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(s_m, \u03c0_m) = \u2211_{\u2206 \u2208 PEF(\u03c0_m)} \u2211_{d \u2208 \u2206} P(d, s_m)",
"eq_num": "(1)"
}
],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "The probability of a derivation d is a product of probabilities of all the rules r that build it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(s_m, \u03c0_m) = \u2211_{\u2206 \u2208 PEF(\u03c0_m)} \u2211_{d \u2208 \u2206} \u220f_{r \u2208 d} P(r)",
"eq_num": "(2)"
}
],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "3 Which differs from (Quirk and Menezes, 2006) .",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Quirk and Menezes, 2006)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "As usual, the parameters of this model are the PCFG rule probabilities which are estimated from the latent treebank using EM as explained next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability model",
"sec_num": "3.1"
},
{
"text": "For training the latent PCFG over the latent treebank, we resort to EM (Dempster et al., 1977) which estimates PCFG rule probabilities to maximize the likelihood of the parallel corpus instances. Computing expectations for EM is done efficiently using Inside-Outside (Lari and Young, 1990) . As in other state splitting models (Matsuzaki et al., 2005) , after splitting the nonterminals, we distribute the probability uniformly over the new rules, and we add to each new rule some random noise to break the symmetry. We split the non-terminals only once as in (Matsuzaki et al., 2005), unlike (Petrov et al., 2006). For estimating the distribution for unknown words we replace all words that appear \u2264 3 times with the \"UNKNOWN\" token.",
"cite_spans": [
{
"start": 71,
"end": 94,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF9"
},
{
"start": 267,
"end": 289,
"text": "(Lari and Young, 1990)",
"ref_id": "BIBREF21"
},
{
"start": 327,
"end": 351,
"text": "(Matsuzaki et al., 2005)",
"ref_id": "BIBREF24"
},
{
"start": 560,
"end": 583,
"text": "(Matsuzaki et al., 2005",
"ref_id": "BIBREF24"
},
{
"start": 584,
"end": 614,
"text": ") (unlike (Petrov et al., 2006",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Splits on Latent Treebank",
"sec_num": "3.2"
},
{
"text": "We use CKY+ (Chappelier and Rajman, 1998) to parse a source sentence s into a forest using the learned split PCFG. Unfortunately, computing the most-likely permutation (or alternatively \u015b) as in",
"cite_spans": [
{
"start": 12,
"end": 41,
"text": "(Chappelier and Rajman, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "argmax_{\u03c0 \u2208 \u03a0} \u2211_{\u2206 \u2208 PEF(\u03c0)} \u2211_{d \u2208 \u2206} P(d, \u03c0_m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "from a lattice of permutations \u03a0 using a PCFG is NP-complete (Sima'an, 2002) . Existing techniques, like variational decoding or Minimum-Bayes Risk (MBR), used for minimizing loss over trees as in (Petrov and Klein, 2007) , are not directly applicable here. Hence, we opt for minimizing the risk of making an error under a loss function over permutations using the MBR decision rule (Kumar and Byrne, 2004) :",
"cite_spans": [
{
"start": 61,
"end": 76,
"text": "(Sima'an, 2002)",
"ref_id": "BIBREF33"
},
{
"start": 197,
"end": 221,
"text": "(Petrov and Klein, 2007)",
"ref_id": "BIBREF27"
},
{
"start": 383,
"end": 406,
"text": "(Kumar and Byrne, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0\u0302 = argmin_\u03c0 \u2211_{\u03c0_r} Loss(\u03c0, \u03c0_r) P(\u03c0_r)",
"eq_num": "(3)"
}
],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "The loss function we minimize is Kendall \u03c4 (Birch and Osborne, 2011; Isozaki et al., 2010a) which is a ratio of wrongly ordered pairs of words (including gapped pairs) to the total number of pairs. We do Monte Carlo sampling of 10000 derivations from the chart of s and then find the least risky permutation in terms of this loss. We sample from the true distribution by sampling edges recursively using their inside probabilities. An empirical distribution over permutations P(\u03c0) is given by the relative frequency of \u03c0 in the sample. With large samples it is hard to efficiently compute the expected Kendall \u03c4 loss for each sampled hypothesis. For a sentence of length k and a sample of size n the complexity of a naive algorithm is O(n^2 k^2). Computing Kendall \u03c4 alone takes O(k^2). We use the fact that Kendall \u03c4 decomposes as a linear function over all skip-bigrams b that could be built for any permutation of length k:",
"cite_spans": [
{
"start": 43,
"end": 68,
"text": "(Birch and Osborne, 2011;",
"ref_id": "BIBREF1"
},
{
"start": 69,
"end": 91,
"text": "Isozaki et al., 2010a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "Kendall(\u03c0, \u03c0_r) = \u2211_b [ (1 \u2212 \u03b4(\u03c0, b)) / (k(k\u22121)/2) ] \u03b4(\u03c0_r, b) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "Here \u03b4 returns 1 if permutation \u03c0 contains the skip bigram b, otherwise it returns 0. With this decomposition we can use the method from (DeNero et al., 2009) to efficiently compute the MBR hypothesis. Combining Equations 3 and 4 we get:",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(DeNero et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "\u03c0\u0302 = argmin_\u03c0 \u2211_{\u03c0_r} \u2211_b [ (1 \u2212 \u03b4(\u03c0, b)) / (k(k\u22121)/2) ] \u03b4(\u03c0_r, b) P(\u03c0_r) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "We can move the summation inside and reformulate the expected Kendall \u03c4 loss as an expectation over the skip-bigrams of the permutation. This means we need to pass through the sampled list only twice: (1) to compute expectations over skip bigrams and (2) to compute the expected loss of each sampled permutation. The time complexity is O(nk^2), which is quite fast in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "We conduct experiments with three baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Baseline A: No preordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Baseline B: Rule based preordering (Isozaki et al., 2010b) , which first obtains an HPSG parse tree using Enju parser 4 and after that swaps the children by moving the syntactic head to the final position to account for different head orientation in English and Japanese.",
"cite_spans": [
{
"start": 37,
"end": 60,
"text": "(Isozaki et al., 2010b)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Baseline C: LADER (Neubig et al., 2012) : latent variable preordering that is based on ITG and large-margin training with latent variables. We used LADER in standard settings without any linguistic features (POS tags or syntactic trees).",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Neubig et al., 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "And we test four variants of our model: We test these models on the English-Japanese NTCIR-8 Patent Translation (PATMT) Task. For tuning we use all NTCIR-7 dev sets and for testing the test set from NTCIR-9 from both directions. All used data was tokenized (English with the Moses tokenizer and Japanese with KyTea 5 ) and filtered for sentences between 4 and 50 words. A subset of this data is used for training the Reordering Grammar, obtained by filtering out sentences that have prime permutations of arity > 5, and for the ITG version arity > 2. Baseline C was trained on 600 sentences because training is prohibitively slow. The Reordering Grammar was trained for 10 iterations of EM on train RG data. We use 30 splits for binary non-terminals and 3 for non-binary. Training on this dataset takes 2 days, and parsing the tuning and test sets without any pruning takes 11 and 18 hours respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 RG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We test how well our model predicts gold reorderings before translation by training the alignment model using MGIZA++ 6 on the training corpus and using it to align the test corpus. Gold reorderings for the test corpus are obtained by sorting words by their average target position (unaligned words follow their right neighboring word). We use the Kendall \u03c4 score for evaluation (note the difference with Section 3.3 where we defined it as a loss function). Table 2 shows that our models outperform all baselines on this task. The only strange result here is that rule-based preordering obtains a lower score than no preordering, which might be an artifact of the Enju parser changing the tokenization of its input, so the Kendall \u03c4 of this system might not reflect the real quality of the preordering. All other systems use the same tokenization.",
"cite_spans": [],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Intrinsic evaluation",
"sec_num": "4.1"
},
{
"text": "The reordered output of all the mentioned baselines and versions of our model is translated with a phrase-based MT system (Koehn et al., 2007) (distortion limit set to 6 with a distance-based reordering model) that is trained on the gold preordering of the training data, \u015b \u2212 t. 7 The only exception is Baseline A, which is trained on the original s \u2212 t.",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "We use a 5-gram language model trained with KenLM 8 , tune 3 times with kb-mira (Cherry and Foster, 2012) to account for tuner instability, and evaluate using Multeval 9 for statistical significance on 3 metrics: BLEU (Papineni et al., 2002) , METEOR (Denkowski and Lavie, 2014) and TER (Snover et al., 2006) . We additionally report the RIBES score (Isozaki et al., 2010a ), which concentrates on word order more than other metrics.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Cherry and Foster, 2012)",
"ref_id": "BIBREF4"
},
{
"start": 218,
"end": 241,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
},
{
"start": 251,
"end": 278,
"text": "(Denkowski and Lavie, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 287,
"end": 308,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF34"
},
{
"start": 346,
"end": 368,
"text": "(Isozaki et al., 2010a",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "Single or all PETs? In Table 3 we see that using all PETs during training makes a big impact on performance. Only the all PETs variants 7 Earlier work on preordering applies the preordering model to the training data to obtain a parallel corpus of guessed \u015b \u2212 t pairs, which are then word re-aligned and used for training the back-end MT system (Khalilov and Sima'an, 2011) . We skip this; we take the risk of mismatch between the preordering and the back-end system, but this simplifies training and saves a good amount of training time.",
"cite_spans": [
{
"start": 348,
"end": 376,
"text": "(Khalilov and Sima'an, 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "8 http://kheafield.com/code/kenlm/ 9 https://github.com/jhclark/multeval All PETs or binary only? RG PET-forest performs significantly better than RG ITG-forest (p < 0.05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "Non-ITG reordering operators are predicted rarely (in only 99 sentences of the test set), but they make a difference, because these operators often appear high in the predicted PET. Furthermore, having these operators during training might allow for better fit to the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "How much reordering is resolved by the Reordering Grammar? Obviously, completely factorizing out the reordering from the translation process is impossible because reordering depends to a certain degree on target lexical choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "To quantify the contribution of Reordering Grammar, we tested decoding with different distortion limit values in the SMT system. We compare the phrase-based (PB) system with a distance-based cost function for reordering (Koehn et al., 2007) with and without preordering. Figure 3 shows that Reordering Grammar gives substantial performance improvements at all distortion limits (both BLEU and RIBES). RG PET-forest is less sensitive to changes in decoder distortion limit than standard PBSMT. (Figure 3 : Distortion effect on BLEU and RIBES.) The performance of RG PET-forest varies only by 1.1 BLEU points while standard PBSMT varies by 4.3 BLEU points. Some local reordering in the decoder seems to help RG PET-forest but large distortion limits seem to degrade the preordering choice. This also shows that the improved performance of RG PET-forest is not only a result of efficiently exploring the full space of permutations, but also a result of improved scoring of permutations. Does the improvement remain for a decoder with an MSD reordering model? We compare the RG PET-forest preordered model against a decoder that uses the strong MSD model (Tillmann, 2004; Koehn et al., 2007) . Without preordering, the MSD decoder outperforms the distance-based decoder (BLEU 27.8) by 1.8 BLEU, whereas the difference between these systems as back-ends to Reordering Grammar (respectively BLEU 32.4 and 32.0) is far smaller (0.4 BLEU). This suggests that a major share of reorderings can be handled well by preordering without conditioning on target lexical choice. Furthermore, this shows that RG PET-forest preordering is not very sensitive to the decoder's reordering model.",
"cite_spans": [
{
"start": 218,
"end": 238,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 1138,
"end": 1154,
"text": "(Tillmann, 2004;",
"ref_id": "BIBREF35"
},
{
"start": 1155,
"end": 1174,
"text": "Koehn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 1177,
"end": 1187,
"text": "(BLEU 27.8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 3",
"ref_id": null
},
{
"start": 503,
"end": 511,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "Comparison to a Hierarchical model (Hiero). Hierarchical preordering is not intended for a hierarchical model such as Hiero (Chiang, 2005 ). Yet, here we compare our preordering system (PB MSD+RG) to Hiero for completeness, while we should keep in mind that Hiero's reordering model has access to much richer training data. We will discuss these differences shortly. Table 4 shows that the difference in BLEU is not statistically significant, but there is a bigger difference in METEOR and TER. RIBES, which concentrates more on reordering, prefers Reordering Grammar over Hiero. It is somewhat surprising that a preordering model combined with a phrase-based model succeeds in rivaling Hiero's performance on English-Japanese. Especially when looking at the differences between the two:",
"cite_spans": [
{
"start": 119,
"end": 132,
"text": "(Chiang, 2005",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "1. Reordering Grammar uses only minimal phrases, while Hiero uses composite (longer) phrases which encapsulate internal reorderings, but also non-contiguous phrases. 2. Hiero conditions its reordering on the lexical target side, whereas the Reordering Grammar does not (by definition). 3. Hiero uses a range of features, e.g., a language model, while Reordering Grammar is a mere generative PCFG. The advantages of Hiero can be brought to bear upon Reordering Grammar by reformulating it as a discriminative model. Which structure is learned? Figure 4 shows an example PET output showing how our model learns: (1) that the article \"the\" has no equivalent in Japanese, (2) that verbs go after their object, (3) to use postpositions instead of prepositions, and (4) to correctly group certain syntactic units, e.g. NPs and VPs.",
"cite_spans": [],
"ref_spans": [
{
"start": 543,
"end": 551,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Extrinsic evaluation in MT",
"sec_num": "4.2"
},
{
"text": "The majority of work on preordering is based on syntactic parse trees, e.g., (Lerner and Petrov, 2013; Khalilov and Sima'an, 2011; Xia and Mccord, 2004 ). Here we concentrate on work that has common aspects with this work. Neubig et Tromble and Eisner (2009) use ITG but do not train the grammar. They only use it to constrain the local search. DeNero and Uszkoreit (2011) present two separate consecutive steps for unsupervised induction of hierarchical structure (ITG) and the induction of a reordering function over it. In contrast, here we learn both the structure and the reordering function simultaneously. Furthermore, at test time, our inference with MBR over a measure of permutation (Kendall) allows exploiting both structure and reordering weights for inference, whereas test-time inference in (DeNero and Uszkoreit, 2011) is also a two step process -the parser forwards to the next stage the best parse. Dyer and Resnik (2010) treat reordering as a latent variable and try to sum over all derivations that lead not only to the same reordering but also to the same translation. In their work they consider all permutations allowed by a given syntactic tree. Saers et al (2012) induce synchronous grammar for translation by splitting the non-terminals, but unlike our approach they split generic nonterminals and not operators. Their most expressive grammar covers only binarizable permutations. The decoder that uses this model does not try to sum over many derivations that have the same yield. They do not make independence assumption like our \"unary trick\" which is probably the reason they do not split more than 8 times. They do not compare their results to any other SMT system and test on a very small dataset. Saluja et al (2014) attempts inducing a refined Hiero grammar (latent synchronous CFG) from Normalized Decomposition Trees (NDT) (Zhang et al., 2008) . While there are similarities with the present work, there are major differences. On the similarity side, NDTs are decomposing alignments in ways similar to PETs, and both Saluja's and our models refine the labels on the nodes of these decompositions. However, there are major differences between the two:",
"cite_spans": [
{
"start": 77,
"end": 102,
"text": "(Lerner and Petrov, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 103,
"end": 130,
"text": "Khalilov and Sima'an, 2011;",
"ref_id": "BIBREF18"
},
{
"start": 131,
"end": 151,
"text": "Xia and Mccord, 2004",
"ref_id": "BIBREF39"
},
{
"start": 233,
"end": 258,
"text": "Tromble and Eisner (2009)",
"ref_id": "BIBREF36"
},
{
"start": 916,
"end": 938,
"text": "Dyer and Resnik (2010)",
"ref_id": "BIBREF14"
},
{
"start": 1169,
"end": 1187,
"text": "Saers et al (2012)",
"ref_id": "BIBREF31"
},
{
"start": 1729,
"end": 1748,
"text": "Saluja et al (2014)",
"ref_id": "BIBREF32"
},
{
"start": 1858,
"end": 1878,
"text": "(Zhang et al., 2008)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "\u2022 Our model is completely monolingual and unlexicalized (does not condition its reordering on the translation) in contrast with the Latent SCFG used in (Saluja et al., 2014 ), \u2022 Our Latent PCFG label splits are defined as refinements of prime permutations, i.e., specifically designed for learning reordering, whereas (Saluja et al., 2014) aims at learning label splitting that helps predicting NDTs from source sentences, \u2022 Our model exploits all PETs and all derivations, both during training (latent treebank) and during inferences. In (Saluja et al., 2014) only left branching NDT derivations are used for learning the model. \u2022 The training data used by (Saluja et al., 2014) is about 60 times smaller in number of words than the data used here; the test set of (Saluja et al., 2014 ) also consists of far shorter sentences where reordering could be less crucial.",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "(Saluja et al., 2014",
"ref_id": "BIBREF32"
},
{
"start": 318,
"end": 339,
"text": "(Saluja et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 539,
"end": 560,
"text": "(Saluja et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 658,
"end": 679,
"text": "(Saluja et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 766,
"end": 786,
"text": "(Saluja et al., 2014",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "A related work with a similar intuition is presented in (Maillette de Buy Wenniger and Sima'an, 2014) , where nodes of a tree structure similar to PETs are labeled with reordering patterns obtained by factorizing word alignments into Hierarchical Alignment Trees. These patterns are used for labeling the standard Hiero grammar. Unlike this work, the labels extracted by (Maillette de Buy Wenniger and Sima'an, 2014) are clustered manually into less than a dozen labels without the possibility of fitting the labels to the training data.",
"cite_spans": [
{
"start": 87,
"end": 101,
"text": "Sima'an, 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "We present a generative Reordering PCFG model learned from latent treebanks over PETs obtained by factorizing permutations over minimal phrase pairs. Our Reordering PCFG handles non-ITG reordering patterns (up to 5-ary branching) and it works with all PETs that factorize a permutation (rather than a single PET). To the best of our knowledge this is the first time both extensions are shown to improve performance. The empirical results on English-Japanese show that (1) when used for preordering, the Reordering PCFG helps particularly with relieving the phrase-based model from long range reorderings, (2) combined with a state-of-the-art phrase model, Reordering PCFG shows performance not too different from Hiero, supporting the common wisdom of factorizing long range reordering outside the decoder, and (3) Reordering PCFG generates derivations that seem to coincide well with linguistically-motivated reordering patterns for English-Japanese. There are various directions we would like to explore, the most obvious of which are integrating the learned reordering with other feature functions in a discriminative setting, and extending the model to deal with non-contiguous minimal phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "All PETs for the same permutation share the same set of prime permutations but differ only in bracketing structure (Zhang and Gildea, 2007).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "After applying the unary trick, we add a constraint on splitting: all nonterminals on an n-ary branching rule must be split simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.nactem.ac.uk/enju/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.phontron.com/kytea/ 6 http://www.kyloo.net/software/doku.php/mgiza:overview",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by STW grant nr. 12271 and NWO VICI grant nr. 277-89-002. We thank Wilker Aziz for comments on an earlier version of the paper and for discussions about MBR and sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simple Permutations and Pattern Restricted Permutations",
"authors": [
{
"first": "H",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Mike",
"middle": [
"D"
],
"last": "Albert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2005,
"venue": "Discrete Mathematics",
"volume": "300",
"issue": "1-3",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael H. Albert and Mike D. Atkinson. 2005. Sim- ple Permutations and Pattern Restricted Permuta- tions. Discrete Mathematics, 300(1-3):1-15.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reordering Metrics for MT",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Birch and Miles Osborne. 2011. Reorder- ing Metrics for MT. In Proceedings of the Associ- ation for Computational Linguistics, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Valence Induction with a Head-Lexicalized PCFG",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of Third Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Mats Rooth. 1998. Valence Induc- tion with a Head-Lexicalized PCFG. In In Proceed- ings of Third Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Generalized CYK Algorithm for Parsing Stochastic CFG",
"authors": [
{
"first": "Jean-C\u00e9dric",
"middle": [],
"last": "Chappelier",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rajman",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the First Workshop on Tabulation in Parsing and Deduction",
"volume": "",
"issue": "",
"pages": "133--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-C\u00e9dric Chappelier and Martin Rajman. 1998. A Generalized CYK Algorithm for Parsing Stochastic CFG. In Proceedings of the First Workshop on Tab- ulation in Parsing and Deduction, pages 133-137.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Batch Tuning Strategies for Statistical Machine Translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2012. Batch Tun- ing Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL HLT '12, pages 427-436.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Hierarchical Phrase-Based Model for Statistical Machine Translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Pro- ceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics (ACL'05), pages 263-270, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Remarks on Nominalization",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1970,
"venue": "Readings in English Transformational Grammar",
"volume": "",
"issue": "",
"pages": "184--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1970. Remarks on Nominalization. In Roderick A. Jacobs and Peter S. Rosenbaum, ed- itors, Readings in English Transformational Gram- mar, pages 184-221. Ginn, Boston.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clause Restructuring for Statistical Machine Translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Ku\u010derov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Ku\u010derov\u00e1. 2005. Clause Restructuring for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguis- tics, ACL '05, pages 531-540.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Comput. Linguist",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2003. Head-Driven Statistical Mod- els for Natural Language Parsing. Comput. Lin- guist., 29(4):589-637, December.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Maximum Likelihood from Incomplete Data via the EM Algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "JOURNAL OF THE ROYAL STA-TISTICAL SOCIETY",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. JOURNAL OF THE ROYAL STA- TISTICAL SOCIETY, SERIES B, 39(1):1-38.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Inducing Sentence Structure from Parallel Corpora for Reordering",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Jakob Uszkoreit. 2011. Inducing Sentence Structure from Parallel Corpora for Re- ordering. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, EMNLP '11, pages 193-203.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fast Consensus Decoding over Translation Forests",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero, David Chiang, and Kevin Knight. 2009. Fast Consensus Decoding over Translation Forests. In Proceedings of the Joint Conference of the 47th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"authors": [],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "567--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 -Volume 2, ACL '09, pages 567-575.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Meteor Universal: Language Specific Translation Evaluation for Any Target Language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL 2014 Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evalua- tion for Any Target Language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Trans- lation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Context-free Reordering, Finite-state Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "858--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer and Philip Resnik. 2010. Context-free Re- ordering, Finite-state Translation. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 858- 866.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic Evaluation of Translation Quality for Distant Language Pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010a. Automatic Evaluation of Translation Quality for Distant Lan- guage Pairs. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Processing, EMNLP '10, pages 944-952.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Head Finalization: A Simple Reordering Rule for SOV Languages",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Katsuhito Sudoh, Hajime Tsukada, and Kevin Duh. 2010b. Head Finalization: A Sim- ple Reordering Rule for SOV Languages. In Pro- ceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10, pages 244-251.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Training a Parser for Machine Translation Reordering",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Katz-Brown",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Ichikawa",
"suffix": ""
},
{
"first": "Masakazu",
"middle": [],
"last": "Seno",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Katz-Brown, Slav Petrov, Ryan McDon- ald, Franz Och, David Talbot, Hiroshi Ichikawa, Masakazu Seno, and Hideto Kazawa. 2011. Train- ing a Parser for Machine Translation Reordering. In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 183-192, Edinburgh, Scotland, UK., July. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Context-Sensitive Syntactic Source-Reordering by Statistical Transduction",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Khalilov",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "38--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxim Khalilov and Khalil Sima'an. 2011. Context- Sensitive Syntactic Source-Reordering by Statistical Transduction. In IJCNLP, pages 38-46.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Ses- sions, ACL '07, pages 177-180.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Minimum Bayes-Risk Decoding for Statistical Machine Translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar and William Byrne. 2004. Mini- mum Bayes-Risk Decoding for Statistical Machine Translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Asso- ciation of Computational Linguistics (HLT-NAACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Estimation of Stochastic Context-Free Grammars using the Inside-Outside Algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Lari and S. J. Young. 1990. The Estimation of Stochastic Context-Free Grammars using the Inside- Outside Algorithm. Computer Speech and Lan- guage, 4:35-56.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Source-Side Classifier Preordering for Machine Translation",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Lerner",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Lerner and Slav Petrov. 2013. Source-Side Clas- sifier Preordering for Machine Translation. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 513- 523. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bilingual Markov Reordering Labels for Hierarchical SMT",
"authors": [],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon Maillette de Buy Wenniger and Khalil Sima'an. 2014. Bilingual Markov Reordering Labels for Hi- erarchical SMT. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 138-147, Doha, Qatar, October.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Probabilistic CFG with Latent Annotations",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "75--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2005. Probabilistic CFG with Latent Annotations. In Proceedings of the 43rd Annual Meeting on As- sociation for Computational Linguistics, ACL '05, pages 75-82.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Inducing a Discriminative Parser to Optimize Machine Translation Reordering",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2012,
"venue": "Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "843--853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a Discriminative Parser to Optimize Machine Translation Reordering. In Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL), pages 843-853, Jeju, Korea, July.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, ACL '02, pages 311-318.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improved Inference for Unlexicalized Parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics; Proceedings of the Main Confer- ence, pages 404-411, Rochester, New York, April.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning Accurate, Compact, and Interpretable Tree Annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 433-440, Sydney, Australia, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Inducing Head-Driven PCFGs with Latent Heads: Refining a Tree-Bank Grammar for Parsing",
"authors": [
{
"first": "Detlef",
"middle": [],
"last": "Prescher",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detlef Prescher. 2005. Inducing Head-Driven PCFGs with Latent Heads: Refining a Tree-Bank Grammar for Parsing. In In ECML05.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Do we need phrases? Challenging the conventional wisdom in Statistical Machine Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL 2006. ACL/SIGPARSE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk and Arul Menezes. 2006. Do we need phrases? Challenging the conventional wisdom in Statistical Machine Translation. In Proceedings of HLT-NAACL 2006. ACL/SIGPARSE, May.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "From Finite-State to Inversion Transductions: Toward Unsupervised Bilingual Grammar Induction",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Saers",
"suffix": ""
},
{
"first": "Karteek",
"middle": [],
"last": "Addanki",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "2325--2340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Saers, Karteek Addanki, and Dekai Wu. 2012. From Finite-State to Inversion Transductions: To- ward Unsupervised Bilingual Grammar Induction. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Con- ference: Technical Papers, 8-15 December 2012, Mumbai, India, pages 2325-2340.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Latent-Variable Synchronous CFGs for Hierarchical Translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Saluja",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "S",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Saluja, C. Dyer, and S. B. Cohen. 2014. Latent- Variable Synchronous CFGs for Hierarchical Trans- lation. Proceedings of EMNLP.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Computational Complexity of Probabilistic Disambiguation",
"authors": [
{
"first": "Khalil",
"middle": [],
"last": "Sima",
"suffix": ""
},
{
"first": "'",
"middle": [],
"last": "An",
"suffix": ""
}
],
"year": 2002,
"venue": "Grammars",
"volume": "5",
"issue": "2",
"pages": "125--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalil Sima'an. 2002. Computational Complex- ity of Probabilistic Disambiguation. Grammars, 5(2):125-151.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annota- tion. In In Proceedings of Association for Machine Translation in the Americas, pages 223-231.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A Unigram Orientation Model for Statistical Machine Translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 2004: Short Papers, HLT-NAACL-Short '04",
"volume": "",
"issue": "",
"pages": "101--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann. 2004. A Unigram Orientation Model for Statistical Machine Translation. In Pro- ceedings of HLT-NAACL 2004: Short Papers, HLT- NAACL-Short '04, pages 101-104.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning Linear Ordering Problems for Better Translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1007--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble and Jason Eisner. 2009. Learning Linear Ordering Problems for Better Translation. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1007-1016, Singapore, August.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Empirical lower bounds on the complexity of translational equivalence",
"authors": [
{
"first": "Sonjia",
"middle": [],
"last": "Benjamin Wellington",
"suffix": ""
},
{
"first": "I",
"middle": [
"Dan"
],
"last": "Waxmonsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL 2006",
"volume": "",
"issue": "",
"pages": "977--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Wellington, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical lower bounds on the complexity of translational equivalence. In In Pro- ceedings of ACL 2006, pages 977-984.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Comput. Linguist",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Cor- pora. Comput. Linguist., 23(3):377-403, Septem- ber.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Improving a Statistical MT System with Automatically Learned Rewrite Patterns",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "508--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia and Michael Mccord. 2004. Improving a Statistical MT System with Automatically Learned Rewrite Patterns. In Proceedings of Coling 2004, pages 508-514, Geneva, Switzerland, August. COL- ING.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Factorization of Synchronous Context-Free Grammars in Linear Time",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2007,
"venue": "NAACL Workshop on Syntax and Structure in Statistical Translation (SSST)",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2007. Factorization of Synchronous Context-Free Grammars in Linear Time. In NAACL Workshop on Syntax and Structure in Statistical Translation (SSST), pages 25-32.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Extracting Synchronous Grammar Rules From Word-Level Alignments in Linear Time",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08)",
"volume": "",
"issue": "",
"pages": "1081--1088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Daniel Gildea, and David Chiang. 2008. Extracting Synchronous Grammar Rules From Word-Level Alignments in Linear Time. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08), pages 1081-1088, Manchester, UK.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Possible Permutation Trees (PETs) for one sentence pair",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Permutation Tree with unary trick 3 Details of Latent Reordering PCFG",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "\u03b4(\u03c0, b)) \u03c0r \u03b4(\u03c0 r , b)P (\u03c0 r ) \u03b4(\u03c0, b))E P (\u03c0r) \u03b4(\u03c0 r , b) b)E P (\u03c0r) \u03b4(\u03c0 r , b) (8)",
"type_str": "figure",
"num": null
},
"FIGREF4": {
"uris": null,
"text": "Example parse of English sentence that predicts reordering for English-Japanese al (2012) trains a latent non-probabilistic discriminative model for preordering as an ITG-like grammar limited to binarizable permutations.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "left -only canonical left branching PET \u2022 RG right -only canonical right branching PET \u2022 RG ITG-forest -all PETs that are binary (ITG) \u2022 RG PET-forest -all PETs.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>corpus</td><td>#sents</td><td colspan=\"2\">#words #words source target</td></tr><tr><td>train RGPET</td><td colspan=\"2\">786k 21M</td><td>-</td></tr><tr><td>train RGITG</td><td colspan=\"2\">783k 21M</td><td>-</td></tr><tr><td>train LADER</td><td>600</td><td>15k</td><td>-</td></tr><tr><td colspan=\"4\">train translation 950k 25M 30M</td></tr><tr><td>tune translation</td><td>2k</td><td>55K</td><td>66K</td></tr><tr><td>test translation</td><td>3k</td><td>78K</td><td>93K</td></tr></table>",
"text": "shows the sizes of data used.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td colspan=\"2\">preord. 27.8</td><td>48.9</td><td>59.2</td><td>68.29</td></tr><tr><td colspan=\"2\">B Rule based 29.6</td><td>48.7</td><td>59.2</td><td>71.12</td></tr><tr><td colspan=\"2\">C LADER 31.1</td><td>50.5</td><td>56.0</td><td>74.29</td></tr><tr><td>RGleft</td><td colspan=\"2\">31.2 AB 50.5 AB</td><td>56.3 AB C</td><td>74.45</td></tr><tr><td>RGright</td><td>31.4</td><td/><td/></tr></table>",
"text": "AB 50.5 AB 56.3 AB C 75.29 RGITG-forest 31.6 ABC 50.8 ABC 55.7 ABC 75.29 RGPET-forest 32.0 ABC 51.0 ABC 55.7 ABC 75.62",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table><tr><td>: Comparison of different preordering</td></tr><tr><td>models. Superscripts A, B and C signify if the sys-</td></tr><tr><td>tem is significantly better (p &lt; 0.05) than the re-</td></tr><tr><td>spective baseline or significantly worse (in which</td></tr><tr><td>case it is a subscript). Significance tests were not</td></tr><tr><td>computed for RIBES. Score is bold if the system</td></tr><tr><td>is significantly better than all the baselines.</td></tr><tr><td>(RG ITG-forest and RG PET-forest ) significantly outper-</td></tr><tr><td>form all baselines. If we are to choose a single</td></tr><tr><td>PET per training instance, then learning RG from</td></tr><tr><td>only left-branching PETs (the one usually cho-</td></tr><tr><td>sen in other work, e.g. (Saluja et al., 2014)) per-</td></tr><tr><td>forms slightly worse than the right-branching PET.</td></tr><tr><td>This is possibly because English is mostly right-</td></tr><tr><td>branching. So even though both PETs describe the</td></tr><tr><td>same reordering, RG right captures reordering over</td></tr><tr><td>English input better than RG left .</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table/>",
"text": "Comparison to MSD and Hiero",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF10": {
"content": "<table><tr><td>shows that using</td></tr><tr><td>Reordering Grammar as front-end to MSD re-</td></tr><tr><td>ordering (full Moses) improves performance by</td></tr><tr><td>2.8 BLEU points. The improvement is confirmed</td></tr><tr><td>by METEOR, TER and RIBES. Our preordering</td></tr><tr><td>model and MSD are complementary -the Re-</td></tr><tr><td>ordering Grammar captures long distance reorder-</td></tr><tr><td>ing, while MSD possibly does better local reorder-</td></tr><tr><td>ings, especially reorderings conditioned on the</td></tr><tr><td>lexical part of translation units.</td></tr><tr><td>Interestingly, the MSD model (BLEU 29.6)</td></tr><tr><td>improves over distance-based reordering</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}