{
"paper_id": "N13-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:39:29.082387Z"
},
"title": "Improving reordering performance using higher order and structural features",
"authors": [
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Ananthakrishnan",
"middle": [],
"last": "Ramanathan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Visweswariah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work has shown that word aligned data can be used to learn a model for reordering source sentences to match the target order. This model learns the cost of putting a word immediately before another word and finds the best reordering by solving an instance of the Traveling Salesman Problem (TSP). However, for efficiently solving the TSP, the model is restricted to pairwise features which examine only a pair of words and their neighborhood. In this work, we go beyond these pairwise features and learn a model to rerank the n-best reorderings produced by the TSP model using higher order and structural features which help in capturing longer range dependencies. In addition to using a more informative set of source side features, we also capture target side features indirectly by using the translation score assigned to a reordering. Our experiments, involving Urdu-English, show that the proposed approach outperforms a state-of-theart PBSMT system which uses the TSP model for reordering by 1.3 BLEU points, and a publicly available state-of-the-art MT system, Hiero, by 3 BLEU points.",
"pdf_parse": {
"paper_id": "N13-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work has shown that word aligned data can be used to learn a model for reordering source sentences to match the target order. This model learns the cost of putting a word immediately before another word and finds the best reordering by solving an instance of the Traveling Salesman Problem (TSP). However, for efficiently solving the TSP, the model is restricted to pairwise features which examine only a pair of words and their neighborhood. In this work, we go beyond these pairwise features and learn a model to rerank the n-best reorderings produced by the TSP model using higher order and structural features which help in capturing longer range dependencies. In addition to using a more informative set of source side features, we also capture target side features indirectly by using the translation score assigned to a reordering. Our experiments, involving Urdu-English, show that the proposed approach outperforms a state-of-theart PBSMT system which uses the TSP model for reordering by 1.3 BLEU points, and a publicly available state-of-the-art MT system, Hiero, by 3 BLEU points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Handling the differences in word orders between pairs of languages is crucial in producing good machine translation. This is especially true for language pairs such as Urdu-English which have significantly different sentence structures. For example, the typical word order in Urdu is Subject Object Verb whereas the typical word order in English is Subject Verb Object. Phrase based systems (Koehn et al., 2003) rely on a lexicalized distortion model (Al-Onaizan and Papineni, 2006; Tillman, 2004) and the target language model to produce output words in the correct order. This is known to be inadequate when the languages are very different in terms of word order (refer to Table 3 in Section 3).",
"cite_spans": [
{
"start": 391,
"end": 411,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF13"
},
{
"start": 451,
"end": 482,
"text": "(Al-Onaizan and Papineni, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 483,
"end": 497,
"text": "Tillman, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 676,
"end": 683,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pre-ordering source sentences while training and testing has become a popular approach in overcoming the word ordering challenge. Most techniques for pre-ordering (Collins et al., 2005; Wang et al., 2007; Ramanathan et al., 2009) depend on a high quality source language parser, which means these methods work only if the source language has a parser (this rules out many languages). Recent work (Visweswariah et al., 2011) has shown that it is possible to learn a reordering model from a relatively small number of hand aligned sentences . This eliminates the need of a source or target parser.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF5"
},
{
"start": 186,
"end": 204,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF25"
},
{
"start": 205,
"end": 229,
"text": "Ramanathan et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 396,
"end": 423,
"text": "(Visweswariah et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we build upon the work of Visweswariah et al. (2011) which solves the reordering problem by treating it as an instance of the Traveling Salesman Problem (TSP). They learn a model which assigns costs to all pairs of words in a sentence, where the cost represents the penalty of putting a word immediately preceding another word. The best permutation is found via the chained Lin-Kernighan heuristic for solving a TSP. Since this model relies on solving a TSP efficiently, it cannot capture features other than pairwise features that examine the words and neighborhood for each pair of words in the source sentence. In the remainder of this paper we refer to this model as the TSP model.",
"cite_spans": [
{
"start": 40,
"end": 66,
"text": "Visweswariah et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our aim is to go beyond this limitation of the TSP model and use a richer set of features instead of using pairwise features only. In particular, we are interested in features that allow us to examine triples of words/POS tags in the candidate reordering per-mutation (this is akin to going from bigram to trigram language models), and also structural features that allow us to examine the properties of the segmentation induced by the candidate permutation. To go beyond the set of features incorporated by the TSP model, we do not solve the search problem which would be NP-hard. Instead, we restrict ourselves to an n-best list produced by the base TSP model and then search in that list. Using a richer set of features, we learn a model to rerank these nbest reorderings. The parameters of the model are learned using the averaged perceptron algorithm. In addition to using a richer set of source side features we also indirectly capture target side features by interpolating the score assigned by our model with the score assigned by the decoder of a MT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To justify the use of these informative features, we point to the example in Table 1 . Here, the head (driver) of the underlined English Noun Phrase (The driver of the car) appears to the left of the Noun Phrase whereas the head (chaalak {driver}) of the corresponding Urdu Noun Phrase (gaadi {car} ka {of} chaalak {driver}) appears to the right of the Noun Phrase. To produce the correct reordering of the source Urdu sentence the model has to make an unusual choice of putting gaadi {car} before bola {said}. We say this is an unusual choice because the model examines only pairwise features and it is unlikely that it would have seen sentences having the bigram \"car said\". If the exact segmentation of the source sentence was known, then the model could have used the information that the word gaadi {car} appears in a segment whose head is the noun chaalak {driver} and hence its not unusual to put gaadi {car} before bola {said} (because the construct \"NP said\" is not unusual). However, since the segmentation of the source sentence is not known in advance, we use a heuristic (explained later) to find the segmentation induced by a reordering. We then extract features (such as f irst word current segment, end word current segment) to approximate these long range dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using this richer set of features with Urdu-English as the source language pair, our approach outperforms the following state of the art systems: (i) a PBSMT system which uses TSP model for reordering (by 1.3 BLEU points), (ii) a hierarchical PBSMT system (by 3 BLEU points). The overall Input Urdu:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "fir gaadi ka chaalak kuch bola Gloss: then car of driver said something English:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Then the driver of the car said something. Ref. reordering: fir chaalak ka gaadi bola kuch Table 1 : Example motivating the use of structural features gain is 6.3 BLEU points when compared to a standard PBSMT system which uses a lexicalized distortion model (Al-Onaizan and Papineni, 2006) .",
"cite_spans": [
{
"start": 258,
"end": 289,
"text": "(Al-Onaizan and Papineni, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. In Section 2 we discuss our approach of re-ranking the n-best reorderings produced by the TSP model. This includes a discussion of the model used, the features used and the algorithm used for learning the parameters of the model. It also includes a discussion on the modification to the Chained Lin-Kernighan heuristic to produce n-best reorderings. Next, in Section 3 we describe our experimental setup and report the results of our experiments. In Section 4 we present some discussions based on our study. In section 5 we briefly describe some prior related work. Finally, in Section 6, we present some concluding remarks and highlight possible directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Re-ranking using higher order and structural features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned earlier, the TSP model (Visweswariah et al., 2011 ) looks only at local features for a word pair (w i , w j ). We believe that for better reordering it is essential to look at higher order and structural features (i.e., features which look at the overall structure of a sentence). The primary reason why Visweswariah et al. (2011) consider only pairwise bigram features is that with higher order features the reordering problem can no longer be cast as a TSP and hence cannot be solved using existing efficient heuristic solvers. However, we do not have to deal with an NP-Hard search problem because instead of considering all possible reorderings we restrict our search space to only the n-best reorderings produced by the base TSP model. Formally, given a set of reorderings, \u03a0 = [\u03c0 1 , \u03c0 2 , \u03c0 3 , ...., \u03c0 n ], for a source sentence s, we are interesting in assigning a score, score(\u03c0), to each of these reorderings and pick the reordering which has the highest score. In this paper, we parametrize this score as:",
"cite_spans": [
{
"start": 36,
"end": 62,
"text": "(Visweswariah et al., 2011",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(\u03c0) = \u03b8 T \u03c6(\u03c0)",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where, \u03b8 is the weight vector and \u03c6(\u03c0) is a vector of features extracted from the reordering \u03c0. The aim then is to find,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0 * = arg max \u03c0\u2208\u03a0 score(\u03c0)",
"eq_num": "(2)"
}
],
"section": "Introduction",
"sec_num": "1"
},
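{
"text": "To make Equations 1 and 2 concrete, the following is a minimal sketch of the re-ranking rule: score each candidate reordering with the linear model and return the arg max over the n-best list. The feature extractor is passed in as a function, and dense numpy vectors are used purely for simplicity (the actual features of Section 2.3 are sparse and binary); this illustrates the scoring rule, not the original implementation.

```python
import numpy as np

def rerank(nbest, theta, extract_features):
    # Return the reordering pi in nbest maximizing theta^T phi(pi) (Equations 1-2).
    scores = [theta @ extract_features(pi) for pi in nbest]
    return nbest[int(np.argmax(scores))]
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},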
{
"text": "In the following sub-sections, we first briefly describe our overall approach towards finding \u03c0 * . Next, we describe our modification to the Lin-Kernighan heuristic for producing n-best outputs for TSP instead of the 1-best output used by (Visweswariah et al., 2011) . We then discuss the features used for re-ranking these n-best outputs, followed by a discussion on the learning algorithm used for estimating the parameters of the model. Finally, we describe how we interpolate the score assigned by our model with the score assigned by the decoder of a SMT engine to indirectly capture target side features.",
"cite_spans": [
{
"start": 240,
"end": 267,
"text": "(Visweswariah et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The training stage of our approach involves two phases : (i) Training a TSP model which will be used to generate n-best reorderings and (ii) Training a re-ranking model using these n-best reorderings. For training both the models we need a collection of sentences where the desired reordering \u03c0 * (x) for each input sentence x is known. These reference orderings are derived from word aligned source-target sentence pairs (see first 4 rows of Figure 1 ). We first divide this word aligned data into N parts and use A \u2212i to denote the alignments leaving out the i-th part. We then train a TSP model M \u2212i using reference reorderings derived from A \u2212i as described in (Visweswariah et al., 2011) . Next, we produce nbest reorderings for the source sentences using the algorithm getN BestReorderings(sentence) described later. Dividing the data into N parts is necessary to ensure that the re-ranking model is trained using a realistic n-best list rather than a very optimistic n-best list (which would be the case if part i is reordered using a model which has already seen part i during training).",
"cite_spans": [
{
"start": 665,
"end": 692,
"text": "(Visweswariah et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overall approach",
"sec_num": "2.1"
},
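{
"text": "As a rough illustration of this jackknifing setup, the sketch below splits the aligned data into N folds and reorders each fold with a model trained on the remaining folds. train_tsp_model and get_n_best_reorderings are hypothetical stand-ins for the TSP training and n-best decoding components described here and in Section 2.2.

```python
def jackknife_nbest(aligned_sentences, train_tsp_model, get_n_best_reorderings,
                    n_folds=10, n=50):
    # Split the word aligned data into N folds (round-robin for simplicity).
    folds = [aligned_sentences[i::n_folds] for i in range(n_folds)]
    nbest_lists = []
    for i, fold in enumerate(folds):
        # A_{-i}: all alignments except fold i; M_{-i}: a model trained on them.
        held_in = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train_tsp_model(held_in)
        # Reorder fold i with a model that never saw it during training.
        for sentence in fold:
            nbest_lists.append(get_n_best_reorderings(model, sentence, n))
    return nbest_lists
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall approach",
"sec_num": null
},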
{
"text": "Each of the n-best reorderings is then represented as a feature vector comprising of higher order and structural features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall approach",
"sec_num": "2.1"
},
{
"text": "The weights of these features are then estimated using the averaged perceptron method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall approach",
"sec_num": "2.1"
},
{
"text": "At test time, getN BestReorderings(sentence) is used to generate the n-best reorderings for the test sentence using the trained TSP model. These reorderings are then represented using higher order and structural features and re-ranked using the weights learned earlier. We now describe the different stages of our algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall approach",
"sec_num": "2.1"
},
{
"text": "The first stage of our approach is to train a TSP model and generate n-best reorderings using it. The decoder used by Visweswariah et al. (2011) relies on the Chained Lin-Kernighan heuristic (Lin and Kernighan, 1973) to produce the 1-best permutation for the TSP problem. Since our algorithm aims at re-ranking an n-best list of permutations (reorderings), we made a modification to the Chained Lin-Kernighan heuristic to produce this n-best list as shown in Algorithm 1 .",
"cite_spans": [
{
"start": 118,
"end": 144,
"text": "Visweswariah et al. (2011)",
"ref_id": "BIBREF24"
},
{
"start": 191,
"end": 216,
"text": "(Lin and Kernighan, 1973)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating n-best reorderings for the TSP model",
"sec_num": "2.2"
},
{
"text": "Algorithm 1 getN BestReorderings(sentence)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating n-best reorderings for the TSP model",
"sec_num": "2.2"
},
{
"text": "N bestSet = \u03c6 \u03c0 * = Identity permutation \u03c0 * = linkernighan(\u03c0 * ) insert(N bestSet, \u03c0 * ) for i = 1 \u2192 nIter do \u03c0 = perturb(\u03c0 * ) \u03c0 = linkernighan(\u03c0 ) if C(\u03c0 ) < max \u03c0\u2208N bestSet C(\u03c0) then InsertOrReplace(N bestSet, \u03c0 ) end if if C(\u03c0 ) < C(\u03c0 * ) then \u03c0 * = \u03c0 end if end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating n-best reorderings for the TSP model",
"sec_num": "2.2"
},
{
"text": "In Algorithm 1 perturb() is a four-edge perturbation described in (Applegate et al., 2003) , and linkernighan() is the Lin-Kernighan heuristic that applies a sequence of flips that potentially returns a lower cost permutation as described in (Lin and Kernighan, 1973) . The cost C(\u03c0) is calculated using a trained TSP model.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Applegate et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 242,
"end": 267,
"text": "(Lin and Kernighan, 1973)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating n-best reorderings for the TSP model",
"sec_num": "2.2"
},
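{
"text": "The Python sketch below mirrors Algorithm 1, maintaining the n lowest-cost permutations found across the perturb-and-optimize iterations. linkernighan, perturb and the cost function C are assumed to be supplied by the TSP components described above; permutations are represented as lists of source positions.

```python
import heapq

def get_n_best_reorderings(num_words, C, linkernighan, perturb, n=50, n_iter=1000):
    best = linkernighan(list(range(num_words)))  # start from the identity permutation
    nbest = [(-C(best), best)]                   # max-heap on cost via negated keys
    for _ in range(n_iter):
        cand = linkernighan(perturb(best))
        # InsertOrReplace: keep cand if the set is not full or it beats the worst member.
        if len(nbest) < n or C(cand) < -nbest[0][0]:
            heapq.heappush(nbest, (-C(cand), cand))
            if len(nbest) > n:
                heapq.heappop(nbest)             # evict the highest-cost permutation
        if C(cand) < C(best):
            best = cand                          # continue perturbing from the new best
    return [pi for _, pi in sorted(nbest, reverse=True)]  # lowest cost first
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating n-best reorderings for the TSP model",
"sec_num": null
},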
{
"text": "We represent each of the n-best reorderings obtained above as a vector of features which can be divided into two sets : (i) higher order features and (ii) struc- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.3"
},
{
"text": "Since deriving a good reordering would essentially require analyzing the syntactic structure of the source sentence, the tasks of reordering and parsing are often considered to be related. The main motivation for using higher order features thus comes from a related work on parsing (Koo and Collins, 2010) where the performance of a state of the art parser was improved by considering higher order dependencies. In our model we use trigram features (see Table 2 ) of the following form:",
"cite_spans": [
{
"start": 283,
"end": 306,
"text": "(Koo and Collins, 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Higher Order Features",
"sec_num": "2.3.1"
},
{
"text": "\u03c6(ru i , ru i+1 , ru i+2 , J(ru i , ru i+1 ), J(ru i+1 , ru i+2 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Higher Order Features",
"sec_num": "2.3.1"
},
{
"text": "where ru i =word at position i in the reordered source sentence and J(x, y) = difference between the positions of x and y in the original source sentence. Figure 1 shows an example of jumps between different word pairs in an Urdu sentence. Since such higher order features will typically be sparse, we also use some back-off features. For example, instead of using the absolute values of jumps we divide the jumps into 3 buckets, viz., high, low and medium and use these buckets in conjunction with the triplets as back-off features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Higher Order Features",
"sec_num": "2.3.1"
},
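{
"text": "A minimal sketch of this trigram feature extraction follows. Here perm maps positions in the reordered sentence to positions in the original sentence, and tokens holds the word (or POS tag) at each original position. The bucket thresholds are illustrative assumptions; the paper does not specify the boundaries of the high/low/medium buckets.

```python
def bucket(jump, lo=2, hi=6):
    # Back-off bucketing of jump sizes; the thresholds here are made up.
    d = abs(jump)
    return 'low' if d <= lo else 'medium' if d <= hi else 'high'

def trigram_features(perm, tokens):
    feats = []
    for i in range(len(perm) - 2):
        ru = tuple(tokens[perm[i + k]] for k in range(3))  # ru_i, ru_{i+1}, ru_{i+2}
        j1 = perm[i + 1] - perm[i]                         # J(ru_i, ru_{i+1})
        j2 = perm[i + 2] - perm[i + 1]                     # J(ru_{i+1}, ru_{i+2})
        feats.append(('tri',) + ru + (j1, j2))             # exact-jump feature
        feats.append(('tri_bucket',) + ru + (bucket(j1), bucket(j2)))  # back-off
    return feats
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Higher Order Features",
"sec_num": null
},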
{
"text": "The second set of features is based on the hypothesis that any reordering of the source sentence induces a segmentation on the sentence. This segmentation is based on the following heuristic: if w i and w i+1 appear next to each other in the original sentence but do not appear next to each other in the reordered sentence then w i marks the end of a segment and w i+1 marks the beginning of the next segment. To understand this better please refer to Figure 1 which shows the correct reordering of an Urdu sentence based on its English translation and the corresponding segmentation induced on the Urdu sentence. If the correct segmentation of a sentence is known in advance then one could use a hierarchical model where the goal would be to reorder segments instead of reordering words individually (basically, instead of words, treat segments as units of reordering. In principle, this is similar to what is done by parser based reordering methods). Since the TSP model does not explicitly use segmentation based features it often produces wrong reorderings (refer to the motivating example in Section 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 458,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Structural Features",
"sec_num": "2.3.2"
},
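{
"text": "Since the heuristic is stated precisely, it is easy to sketch. Scanning the original positions, w_i closes a segment whenever w_{i+1} does not immediately follow it in the reordering; adjacency is read here as 'w_i immediately before w_{i+1}', which is one possible reading of the heuristic. Permutations are represented as in the earlier sketches.

```python
def induced_segments(perm):
    # successor[p] = original position immediately following p in the reordering.
    successor = {perm[k]: perm[k + 1] for k in range(len(perm) - 1)}
    segments, current = [], [0]
    for i in range(len(perm) - 1):
        if successor.get(i) == i + 1:
            current.append(i + 1)     # w_i and w_{i+1} remain adjacent: same segment
        else:
            segments.append(current)  # w_i ends a segment
            current = [i + 1]
    segments.append(current)
    return segments

# For example, the identity permutation yields a single segment covering the
# whole sentence, while swapping the two halves of a sentence yields two segments.
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Features",
"sec_num": null
},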
{
"text": "Reordering such sentences correctly requires some knowledge about the hierarchical structure of the sentence. To capture such hierarchical information, we use features which look at the elements (words, pos tags) of a segment and its neighboring segments. These features along with examples are listed in Table 2 . These features should help us in selecting a reordering which induces a segmentation which is closest to the correct segmentation induced by the reference reordering. Note that every feature listed in Table 2 is a binary feature which takes on the value 1 if it fires for the given reordering and value 0 if it does not fire for the given reordering. In addition to the features listed in Table 2 we also use the score assigned by the TSP model as a feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 516,
"end": 523,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 704,
"end": 711,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Structural Features",
"sec_num": "2.3.2"
},
{
"text": "We use perceptron as the learning algorithm for estimating the parameters of our model described in Equation 1. To begin with, all parameters are initialized to 0 and the learning algorithm is run for N iterations. During each iteration the parameters are updated after every training instance is seen. For example, during the i-th iteration, after seeing the j-th training sentence, we update the k-th parameter \u03b8 k using the following update rule: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "\u03b8 (i,j) k = \u03b8 (i,j\u22121) k + \u03c6 k (\u03c0 gold j ) \u2212 \u03c6 k (\u03c0 * j ) (3) where, \u03b8 (i,j) k =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "\u03c0 * j = arg max \u03c0\u2208\u03a0 j \u03b8 (i,j\u22121) T \u03c6(\u03c0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "where \u03a0 j is the set of n-best reorderings for the jth sentence. \u03c0 * j is thus the highest-scoring reordering for the j-th sentence under the current parameter vector. Since the averaged perceptron method is known to perform better than the perceptron method, we used the averaged values of the parameters at the end of N iterations, calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 avg k = 1 N \u2022 t N i=1 t j=1 \u03b8 (i,j) k",
"eq_num": "(4)"
}
],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "where, N = Number of iterations t = Number of training instances",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
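{
"text": "A compact sketch of this training loop, folding Equations 2, 3 and 4 together, is given below. Dense numpy vectors are used for simplicity (the real features are sparse and binary), and target stands for the gold reordering, or its closest-to-gold replacement introduced next.

```python
import numpy as np

def train_averaged_perceptron(train_data, phi, dim, n_iters=5):
    # train_data is a list of (nbest, target) pairs; phi maps a reordering
    # to a feature vector of length dim. 5 iterations matches Section 3.
    theta = np.zeros(dim)
    theta_sum = np.zeros(dim)
    for _ in range(n_iters):
        for nbest, target in train_data:
            pred = max(nbest, key=lambda pi: theta @ phi(pi))  # current best (Eq. 2)
            theta += phi(target) - phi(pred)                   # perceptron update (Eq. 3)
            theta_sum += theta
    return theta_sum / (n_iters * len(train_data))             # averaging (Eq. 4)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": null
},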
{
"text": "We observed that in most cases the reference reordering in not a part of the n-best list produced by the TSP model. In such cases instead of using \u03c6 k (\u03c0 gold j ) for updating the weights in Equation 3 we use \u03c6 k (\u03c0 closest to gold j ) as this is known to be a better strategy for learning a re-ranking model (Arun and Koehn, 2007) . \u03c0 closest to gold j is given by:",
"cite_spans": [
{
"start": 309,
"end": 331,
"text": "(Arun and Koehn, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "arg max \u03c0 i j \u2208\u03a0 j # of common bigram pairs in \u03c0 i j and \u03c0 gold j len(\u03c0 gold j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
{
"text": "where, \u03a0 j = set of n-best reorderings for j th sentence \u03c0 closest to gold j is thus the reordering which has the maximum overlap with \u03c0 gold j in terms of the number of word pairs (w m , w n ) where w n is put next to w m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": "2.4"
},
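{
"text": "The overlap criterion above reduces to a few lines. Reorderings are again lists of original positions, and the common bigram pairs are the adjacent position pairs shared with the gold reordering; dividing by len(gold) does not change the arg max across candidates but is kept to match the normalized criterion.

```python
def adjacent_pairs(pi):
    # The set of word pairs (w_m, w_n) where w_n is put immediately after w_m.
    return {(pi[k], pi[k + 1]) for k in range(len(pi) - 1)}

def closest_to_gold(nbest, gold):
    gold_pairs = adjacent_pairs(gold)
    return max(nbest, key=lambda pi: len(adjacent_pairs(pi) & gold_pairs) / len(gold))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating model parameters",
"sec_num": null
},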
{
"text": "The approach described above aims at producing a better reordering by extracting richer features from the source sentence. Since the final aim is to improve the performance of an MT system, it would potentially be beneficial to interpolate the scores assigned by Equation 1 to a given reordering with the score assigned by the decoder of an MT system to the translation of the source sentence under this reordering. Intuitively, the MT score would allow us to capture features from the target sentence which are obviously not available to our model. With this motivation, we use the following interpolated score (score I ) to select the best translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolating with MT score",
"sec_num": "2.5"
},
{
"text": "score I (t i ) = \u03bb\u2022score \u03b8 (\u03c0 i ) + (1 \u2212 \u03bb) \u2022 score M T (t i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolating with MT score",
"sec_num": "2.5"
},
{
"text": "where, t i =translation produced under the i-th reordering of the source sentence score \u03b8 (\u03c0 i ) =score assigned by our model to the i-th reordering score M T (t i ) =score assigned by the MT system to t i The weight \u03bb is used to ensure that score \u03b8 (\u03c0 i ) and score M T (\u03c0 i ) are in the same range (it just serves as a normalization constant). We acknowledge that the above process is expensive because it requires the MT system to decode n reorderings for every source sentence. However, the aim of this work is to show that interpolating with the MT score which implicitly captures features from the target sentence helps in improving the performance. Ideally, this interpolation should (and can) be done at decode time without having to decode n reorderings for every source sentence (for example by expressing the n reorderings as a lattice), but, we leave this as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolating with MT score",
"sec_num": "2.5"
},
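{
"text": "A sketch of this selection step is shown below. decode is a hypothetical wrapper around the MT decoder that returns a translation and its decoder score for a given reordering, and lam is the normalization weight \u03bb; as noted above, decoding every candidate is expensive and is used here only to illustrate the interpolation.

```python
def select_with_mt_score(nbest, score_theta, decode, lam):
    best_translation, best_score = None, float('-inf')
    for pi in nbest:
        translation, mt_score = decode(pi)                  # t_i and score_MT(t_i)
        s = lam * score_theta(pi) + (1.0 - lam) * mt_score  # score_I(t_i)
        if s > best_score:
            best_translation, best_score = translation, s
    return best_translation
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolating with MT score",
"sec_num": null
},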
{
"text": "We evaluated our reordering approach on Urdu-English. We use two types of evaluation, one intrinsic and one extrinsic. For intrinsic evaluation, we compare the reordered source sentence in Urdu with a reference reordering obtained from the hand alignments using BLEU (referred to as monolingual BLEU or mBLEU by (Visweswariah et al., 2011) ). Additionally, we evaluate the effect of reordering on MT performance using BLEU (extrinsic evaluation).",
"cite_spans": [
{
"start": 312,
"end": 339,
"text": "(Visweswariah et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "As mentioned earlier, our training process involves two phases : (i) Generating n-best reorderings for the training data and (ii) using these n-best reorderings to train a perceptron model. We use the same data for training the reordering model as well as our perceptron model. This data contains 180K words of manual alignments (part of the NIST MT-08 training data) and 3.9M words of automatically generated machine alignments (1.7M words from the NIST MT-08 training data 1 and 2.2M words extracted from sources on the web 2 ). The machine alignments were generated using a supervised maximum entropy model (Ittycheriah and Roukos, 2005) and then corrected using an improved correction model (McCarley et al., 2011) . We first divide the training data into 10 folds. The n-best reorderings for each fold are then generated using a model trained on the remaining 9 folds. This division into 10 folds is done for reasons explained earlier in Section 2.1. These n-best reorderings are then used to train the perceptron model as described in Section 2.4. Note that Visweswariah et al. (2011) used only manually aligned data for training the TSP model. However, we use machine aligned data in addition to manually aligned data for training the TSP model as it leads to better performance. We used this improvised TSP model as the state of the art baseline (rows 2 and 3 in Tables 3 and 4 respectively) for comparing with our approach.",
"cite_spans": [
{
"start": 610,
"end": 640,
"text": "(Ittycheriah and Roukos, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 695,
"end": 718,
"text": "(McCarley et al., 2011)",
"ref_id": "BIBREF17"
},
{
"start": 1064,
"end": 1090,
"text": "Visweswariah et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "We observed that the perceptron algorithm converges after 5 iterations beyond which there is very little (<1%) improvement in the bigram precision on the training data itself (bigram precision is the fraction of word pairs which are correctly put next to each other). Hence, for all the numbers reported in this paper, we used 5 iterations of perceptron training. Similarly, while generating the n-best reorderings, we experimented with following values of n : 10, 25, 50, 100 and 200. We observed that, by restricting the search space to the top-50 reorderings we get the best reordering performance (mBLEU) on a development set. Hence, we used n=50 for our MT experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "For intrinsic evaluation we use a development set of 8017 Urdu tokens reordered manually. Table 3 compares the performance of the top-1 reordering output by our algorithm with the top-1 reordering generated by the improved TSP model in terms of mBLEU. We see a gain of 1.8 mBLEU points with our approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "Next, we see the impact of the better reorderings produced by our system on the performance of a state-of-the-art MT system. For this, we used a standard phrase based system (Al-Onaizan and Papineni, 2006) with a lexicalized distortion model with a window size of +/-4 words (Tillmann and Ney, 2003) . As mentioned earlier, our training data consisted of 3.9M words including the NIST MT-08 training data. We use HMM alignments along with higher quality alignments from a supervised aligner (McCarley et al., 2011) . The Gigaword English corpus was used for building the English language model. We report results on the NIST MT-08 evaluation set, averaging BLEU scores from the News and Web conditions to provide a single BLEU score. Table 4 compares the MT performance obtained by reordering the training and test data using the following approaches:",
"cite_spans": [
{
"start": 174,
"end": 205,
"text": "(Al-Onaizan and Papineni, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 275,
"end": 299,
"text": "(Tillmann and Ney, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 491,
"end": 514,
"text": "(McCarley et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "1. No pre-ordering: A baseline system which does not use any source side reordering as a preprocessing step 2. HIERO : A state of the art hierarchical phrase based translation system (Chiang, 2007) 3. TSP: A system which uses the 1-best reordering produced by the TSP model which reranks n-best reorderings produced by TSP using higher order and structural features 5. Interpolating with MT score : A system which interpolates the score assigned to a reordering by our model with the score assigned by a MT system",
"cite_spans": [
{
"start": 183,
"end": 197,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical evaluation",
"sec_num": "3"
},
{
"text": "We used Joshua 4.0 (Ganitkevitch et al., 2012) which provides an open source implementation of HIERO. For training, tuning and testing HIERO we used the same experimental setup as described above. As seen in Table 4 , we get an overall gain of 6.2 BLEU points with our approach as compared to a baseline system which does not use any reordering. More importantly, we outperform (i) a PBSMT system which uses the TSP model by 1.3 BLEU points and (ii) a state of the art hierarchical phrase based translation system by 3 points.",
"cite_spans": [
{
"start": 19,
"end": 46,
"text": "(Ganitkevitch et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Higher order & structural features: A system",
"sec_num": "4."
},
{
"text": "We now discuss some error corrections and ablation tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4"
},
{
"text": "We first give an example where the proposed approach performed better than the TSP model. In the example below, I = input sentence, E= gold English translation, T = incorrect reordering produced by TSP and O = correct reordering produced by our approach. Note that the words roman catholic aur protestant in the input sentence get translated as We split the test data into roughly three equal parts based on length, and calculated the mBLEU improvements on each of these parts as reported in Table 5 . These results show that the model works much better for medium-to-long sentences. In fact, we see a drop in performance for small sentences. A possible reason for this could be that the structural features that we use are derived through a heuristic that is error-prone, and in shorter sentences, where there would be fewer reordering problems, these errors hurt more than they help. While this needs to be analyzed further, we could meanwhile combine the two models fruitfully by using the base TSP model for small sentences and the new model for longer sentences. ",
"cite_spans": [],
"ref_spans": [
{
"start": 492,
"end": 499,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example of error correction",
"sec_num": "4.1"
},
{
"text": "To study the contribution of each feature to the reordering performance, we did an ablation test wherein we disabled one feature at a time and measured the change in the mBLEU scores. Table 6 summarizes the results of our ablation test. The maximum drop in performance is obtained when the pos triplet jumps feature is disabled. This observation supports our claim that higher order features (more than bigrams) are essential for better reordering. The lex triplet jumps feature has the least impact on the performance mainly because it is a lexicalized feature and hence very sparse. Also note that there is a high correlation between the performances obtained by dropping one feature from each of the following pairs : i) first lex current segment, first lex next segment ii) first pos current segment, first pos next segment iii) end lex current segment, end lex next segment. This is because these pairs of features are highly dependent features. Note that similar to the pos triplet jumps feature we also tried a pos quadruplet jumps feature but it did not help (mainly due to overfitting and sparsity).",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Ablation test",
"sec_num": "4.3"
},
{
"text": "There are several studies which have shown that reordering the source side sentence to match the target side order leads to improvements in Machine Translation. These approaches can be broadly classified into three types. First, approaches which reorder source sentences by applying rules to the source side parse; the rules are either hand-written (Collins et al., 2005; Wang et al., 2007; Ramanathan et al., 2009) or learned from data (Xia and McCord, 2004; Genzel, 2010; Visweswariah et al., 2010) . These approaches require a source side parser which is not available for many languages. The second type of approaches treat machine translation decoding as a parsing problem by using source and/or target side syntax in a Context Free Grammar framework. These include Hierarchical models (Chiang, 2007) and syntax based models (Yamada and Knight, 2002; Galley et al., 2006; Liu et al., 2006; Zollmann and Venugopal, 2006) . The third type of approaches, avoid the use of a parser (as required by syntax based models) and instead train a reordering model using reference reorderings derived from aligned data. These approaches (Tromble and Eisner, 2009; Visweswariah et al., 2011; DeNero and Uszkoreit, 2011; Neubig et al., 2012 ) have a low decode time complexity as reordering is done as a preprocessing step and not integrated with the decoder. Our work falls under the third category, as it improves upon the work of (Visweswariah et al., 2011) which is closely related to the work of (Tromble and Eisner, 2009) but performs better. The focus of our work is to use higher order and structural features (based on segmentation of the source sentence) which are not captured by their model. Some other works have used collocation based segmentation (Henr\u00edquez Q. et al., 2010) and Multiword Expressions as segments (Bouamor et al., 2012) to improve the performance of SMT but without much success. The idea of improving performance by reranking a n-best list of outputs has been used recently for the related task of parsing (Katz-Brown et al., 2011) using targeted self-training for improving the performance of reordering. However, in contrast, in our work we directly aim at improving the performance of a reordering model.",
"cite_spans": [
{
"start": 349,
"end": 371,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF5"
},
{
"start": 372,
"end": 390,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF25"
},
{
"start": 391,
"end": 415,
"text": "Ramanathan et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 437,
"end": 459,
"text": "(Xia and McCord, 2004;",
"ref_id": "BIBREF26"
},
{
"start": 460,
"end": 473,
"text": "Genzel, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 474,
"end": 500,
"text": "Visweswariah et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 791,
"end": 805,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 830,
"end": 855,
"text": "(Yamada and Knight, 2002;",
"ref_id": "BIBREF27"
},
{
"start": 856,
"end": 876,
"text": "Galley et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 877,
"end": 894,
"text": "Liu et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 895,
"end": 924,
"text": "Zollmann and Venugopal, 2006)",
"ref_id": "BIBREF28"
},
{
"start": 1129,
"end": 1155,
"text": "(Tromble and Eisner, 2009;",
"ref_id": "BIBREF22"
},
{
"start": 1156,
"end": 1182,
"text": "Visweswariah et al., 2011;",
"ref_id": "BIBREF24"
},
{
"start": 1183,
"end": 1210,
"text": "DeNero and Uszkoreit, 2011;",
"ref_id": "BIBREF6"
},
{
"start": 1211,
"end": 1230,
"text": "Neubig et al., 2012",
"ref_id": "BIBREF18"
},
{
"start": 1423,
"end": 1450,
"text": "(Visweswariah et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 1491,
"end": 1517,
"text": "(Tromble and Eisner, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 1752,
"end": 1779,
"text": "(Henr\u00edquez Q. et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 1818,
"end": 1840,
"text": "(Bouamor et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this work, we proposed a model for re-ranking the n-best reorderings produced by a state of the art reordering model (TSP model) which is limited to pair wise features. Our model uses a more informative set of features consisting of higher order features, structural features and target side features (captured indirectly using translation scores). The problem of intractability is solved by restricting the search space to the n-best reorderings produced by the TSP model. A detailed ablation test shows that of all the features used, the pos triplet features are most informative for reordering. A gain of 1.3 and 3 BLEU points over a state of the art phrase based and hierarchical machine translation system respectively provides good extrinsic validation of our claim that such long range features are useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As future work, we would like to evaluate our algorithm on other language pairs. We also plan to integrate the score assigned by our model into the decoder to avoid having to do n decodings for every source sentence. Also, it would be interesting to model the segmentation explicitly, where the aim would be to first segment the sentence and then use a two level hierarchical reordering model which first reorders these segments and then reorders the words within the segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://www.ldc.upenn.edu 2 http://centralasiaonline.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Distortion models for statistical machine translation",
"authors": [
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
}
],
"year": 2006,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "529--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaser Al-Onaizan and Kishore Papineni. 2006. Dis- tortion models for statistical machine translation. In Proceedings of ACL, ACL-44, pages 529-536, Mor- ristown, NJ, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Chained lin-kernighan for large traveling salesman problems",
"authors": [
{
"first": "David",
"middle": [],
"last": "Applegate",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Rohe",
"suffix": ""
}
],
"year": 2003,
"venue": "In INFORMS Journal On Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Applegate, William Cook, and Andre Rohe. 2003. Chained lin-kernighan for large traveling salesman problems. In INFORMS Journal On Computing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Online learning methods for discriminative training of phrase based statistical machine translation",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of MT Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Arun and Philipp Koehn. 2007. Online learning methods for discriminative training of phrase based statistical machine translation. In In Proceed- ings of MT Summit.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Identifying bilingual multiword expressions for statistical machine translation",
"authors": [
{
"first": "Dhouha",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nasredine",
"middle": [],
"last": "Semmar",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhouha Bouamor, Nasredine Semmar, and Pierre Zweigenbaum. 2012. Identifying bilingual multi- word expressions for statistical machine translation. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uur Doan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Comput. Linguist",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Comput. Linguist., 33(2):201-228, June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Clause restructuring for statistical machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Ku\u010derov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Ku\u010derov\u00e1. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531-540, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Inducing sentence structure from parallel corpora for reordering",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Jakob Uszkoreit. 2011. Inducing sen- tence structure from parallel corpora for reordering. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, EMNLP '11, pages 193-203, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceed- ings of the 21st International Conference on Compu- tational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 961-968, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joshua 4.0: Packing, pro, and paraphrases",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "283--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Yuan Cao, Jonathan Weese, Matt Post, and Chris Callison-Burch. 2012. Joshua 4.0: Pack- ing, pro, and paraphrases. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 283-291, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically learning sourceside reordering rules for large scale machine translation",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Genzel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "376--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitriy Genzel. 2010. Automatically learning source- side reordering rules for large scale machine transla- tion. In Proceedings of the 23rd International Con- ference on Computational Linguistics, COLING '10, pages 376-384, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using collocation segmentation to augment the phrase table",
"authors": [
{
"first": "Carlos",
"middle": [
"A"
],
"last": "Henr\u00edquez Q.",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-juss\u00e0",
"suffix": ""
},
{
"first": "Vidas",
"middle": [],
"last": "Daudaravicius",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"B"
],
"last": "Mari\u00f1o",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metric-sMATR, WMT '10",
"volume": "",
"issue": "",
"pages": "98--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Carlos Henr\u00edquez Q., R. Marta Costa-juss\u00e0, Vidas Daudaravicius, E. Rafael Banchs, and B. Jos\u00e9 Mari\u00f1o. 2010. Using collocation segmentation to augment the phrase table. In Proceedings of the Joint Fifth Work- shop on Statistical Machine Translation and Metric- sMATR, WMT '10, pages 98-102, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A maximum entropy word aligner for Arabic-English machine translation",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP, HLT '05",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abraham Ittycheriah and Salim Roukos. 2005. A max- imum entropy word aligner for Arabic-English ma- chine translation. In Proceedings of HLT/EMNLP, HLT '05, pages 89-96, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Training a parser for machine translation reordering",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Katz-Brown",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Ichikawa",
"suffix": ""
},
{
"first": "Masakazu",
"middle": [],
"last": "Seno",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Katz-Brown, Slav Petrov, Ryan McDonald, Franz Och, David Talbot, Hiroshi Ichikawa, Masakazu Seno, and Hideto Kazawa. 2011. Training a parser for machine translation reordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 183-192, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of HLT-NAACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of the 48th",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Association for Computational Linguistics. S. Lin and B. W. Kernighan. 1973. An effective heuristic algorithm for the travelling-salesman problem",
"authors": [],
"year": null,
"venue": "Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "498--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 1-11, Stroudsburg, PA, USA. Association for Computational Linguistics. S. Lin and B. W. Kernighan. 1973. An effective heuristic algorithm for the travelling-salesman problem. Oper- ations Research, pages 498-516.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tree-tostring alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "609--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In Proceedings of the 21st International Con- ference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 609-616, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A correction model for word alignments",
"authors": [
{
"first": "J",
"middle": [
"Scott"
],
"last": "McCarley",
"suffix": ""
},
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Jian-Ming",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Scott McCarley, Abraham Ittycheriah, Salim Roukos, Bing Xiang, and Jian-ming Xu. 2011. A correc- tion model for word alignments. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 889-898, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Inducing a discriminative parser to optimize machine translation reordering",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "843--853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a discriminative parser to optimize machine translation reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, pages 843-853, Jeju Island, Ko- rea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Case markers and morphology: addressing the crux of the fluency problem in English-Hindi smt",
"authors": [
{
"first": "Ananthakrishnan",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Hansraj",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Avishek",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ananthakrishnan Ramanathan, Hansraj Choudhary, Avishek Ghosh, and Pushpak Bhattacharyya. 2009. Case markers and morphology: addressing the crux of the fluency problem in English-Hindi smt. In Proceedings of ACL-IJCNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A unigram orientation model for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In Proceedings of HLT-NAACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "97--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann and Hermann Ney. 2003. Word re- ordering and a dynamic programming beam search al- gorithm for statistical machine translation. Computa- tional Linguistics, 29(1):97-133.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning linear ordering problems for better translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble and Jason Eisner. 2009. Learning linear or- dering problems for better translation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Syntax based reordering with automatically derived rules for improved statistical machine translation",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Visweswariah",
"suffix": ""
},
{
"first": "Jiri",
"middle": [],
"last": "Navratil",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Vijil",
"middle": [],
"last": "Chenthamarakshan",
"suffix": ""
},
{
"first": "Nandakishore",
"middle": [],
"last": "Kambhatla",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nandakishore Kamb- hatla. 2010. Syntax based reordering with automat- ically derived rules for improved statistical machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A word reordering model for improved machine translation",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Visweswariah",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Gandhe",
"suffix": ""
},
{
"first": "Ananthakrishnan",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Jiri",
"middle": [],
"last": "Navratil",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A word reordering model for improved machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 486-496, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Chinese syntactic reordering for statistical machine translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of EMNLP-CoNLL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving a statistical MT system with automatically learned rewrite patterns",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia and Michael McCord. 2004. Improving a sta- tistical MT system with automatically learned rewrite patterns. In COLING.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A decoder for syntax-based statistical mt",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2002. A decoder for syntax-based statistical mt. In Proceedings of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Syntax augmented machine translation via chart parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings on the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Machine Translation.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Segmentation induced on the Urdu sentence when it is reordered according to its English translation. Note that the words Shyam and mere are adjacent to each other in the original Urdu sentence but not in the reordered Urdu sentence. Hence, the word mere marks the beginning of a new segment.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "value of the k-th parameter after seeing sentence j in iteration i \u03c6 k = k-th feature \u03c0 gold j = gold reordering for the j-th sentence",
"type_str": "figure"
},
"TABREF1": {
"text": "Features used in our model. tural features. The higher order features are essentially trigram lexical and pos features whereas the structural features are derived from the sentence structure induced by a reordering (explained later).",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"text": "mBLEU scores for Urdu to English reordering using different models.",
"type_str": "table",
"content": "<table><tr><td>Approach</td><td>BLEU</td></tr><tr><td>No pre-ordering</td><td>21.9</td></tr><tr><td>HIERO</td><td>25.2</td></tr><tr><td>TSP</td><td>26.9</td></tr><tr><td>Higher order &amp; structural features</td><td>27.5</td></tr><tr><td>Interpolating with MT score</td><td>28.2</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"text": "MT performance for Urdu to English without reordering and with reordering using different approaches.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF7": {
"text": "Ablation test indicating the contribution of each feature to the reordering performance.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}