|
{ |
|
"paper_id": "N03-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:59.316790Z" |
|
}, |
|
"title": "Greedy Decoding for Statistical Machine Translation in Almost Linear Time", |
|
"authors": [ |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Germann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "USC Information Sciences Institute Marina del Rey", |
|
"location": { |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present improvements to a greedy decoding algorithm for statistical machine translation that reduce its time complexity from at least cubic (\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 when applied na\u00efvely) to practically linear time 1 without sacrificing translation quality. We achieve this by integrating hypothesis evaluation into hypothesis creation, tiling improvements over the translation hypothesis at the end of each search iteration, and by imposing restrictions on the amount of word reordering during decoding. \u00a9 that maximizes the translation probability \u00a1 \u00a9 \u00a7 for the given input. Knight (1999) has shown the problem to be NP-complete. Due to the complexity of the task, practical MT systems usually do not employ optimal decoders (that is, decoders that are guaranteed to find an optimal solution within the constraints of the framework), but rely on approximative algorithms instead. Empirical evidence suggests that such algorithms can perform resonably well. For example, Berger et al. (1994), attribute only 5% of the translation errors of their Candide system, which uses", |
|
"pdf_parse": { |
|
"paper_id": "N03-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present improvements to a greedy decoding algorithm for statistical machine translation that reduce its time complexity from at least cubic (\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 when applied na\u00efvely) to practically linear time 1 without sacrificing translation quality. We achieve this by integrating hypothesis evaluation into hypothesis creation, tiling improvements over the translation hypothesis at the end of each search iteration, and by imposing restrictions on the amount of word reordering during decoding. \u00a9 that maximizes the translation probability \u00a1 \u00a9 \u00a7 for the given input. Knight (1999) has shown the problem to be NP-complete. Due to the complexity of the task, practical MT systems usually do not employ optimal decoders (that is, decoders that are guaranteed to find an optimal solution within the constraints of the framework), but rely on approximative algorithms instead. Empirical evidence suggests that such algorithms can perform resonably well. For example, Berger et al. (1994), attribute only 5% of the translation errors of their Candide system, which uses", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Most of the current work in statistical machine translation builds on word replacement models developed at IBM in the early 1990s (Brown et al., 1990 (Brown et al., , 1993 Berger et al., 1994 Berger et al., , 1996 . Based on the conventions established in Brown et al. (1993) , these models are commonly referred to as the (IBM) Models 1-5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 149, |
|
"text": "(Brown et al., 1990", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 171, |
|
"text": "(Brown et al., , 1993", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 191, |
|
"text": "Berger et al., 1994", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 213, |
|
"text": "Berger et al., , 1996", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 275, |
|
"text": "Brown et al. (1993)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the big challenges in building actual MT systems within this framework is that of decoding: finding the translation candidate a restricted stack search, to search errors. Using the same evaluation metric (but different evaluation data), Wang and Waibel (1997) report search error rates of 7.9% and 9.3%, respectively, for their decoders. Och et al. (2001) and Germann et al. (2001) both implemented optimal decoders and benchmarked approximative algorithms against them. Och et al. report word error rates of 68.68% for optimal search (based on a variant of the A* algorithm), and 69.65% for the most restricted version of a decoder that combines dynamic programming with a beam search (Tillmann and Ney, 2000) . Germann et al. (2001) compare translations obtained by a multi-stack decoder and a greedy hill-climbing algorithm against those produced by an optimal integer programming decoder that treats decoding as a variant of the traveling-salesman problem (cf. Knight, 1999) . Their overall performance metric is the sentence error rate (SER). For decoding with IBM Model 3, they report SERs of about 57% (6-word sentences) and 76% (8-word sentences) for optimal decoding, 58% and 75% for stack decoding, and 60% and 75% for greedy decoding, which is the focus of this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 266, |
|
"text": "Wang and Waibel (1997)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 362, |
|
"text": "Och et al. (2001)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 388, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 693, |
|
"end": 717, |
|
"text": "(Tillmann and Ney, 2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 741, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 972, |
|
"end": 985, |
|
"text": "Knight, 1999)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "All these numbers suggest that approximative algorithms are a feasible choice for practical applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The purpose of this paper is to describe speed improvements to the greedy decoder mentioned above. While acceptably fast for the kind of evaluation used in Germann et al. (2001) , namely sentences of up to 20 words, its speed becomes an issue for more realistic applications. Brute force translation of the 100 short news articles in Chinese from the TIDES MT evaluation in June 2002 (878 segments; ca. 25k tokens) requires, without any of the improvements described in this paper, over 440 CPU hours, using the simpler, \"faster\" algorithm # \u00a2 $ (described below). We will show that this time can be reduced to ca. 40 minutes without sacrificing translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 177, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the following, we first describe the underlying IBM model(s) of machine translation (Section 2) and our hillclimbing algorithm (Section 3). In Section 4, we discuss improvements to the algorithm and its implementation, and the effect of restrictions on word reordering. Brown et al. (1993) and Berger et al. (1994 Berger et al. ( , 1996 view the problem of translation as that of decoding a message that has been distorted in a noisy channel.", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 292, |
|
"text": "Brown et al. (1993)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 316, |
|
"text": "Berger et al. (1994", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 339, |
|
"text": "Berger et al. ( , 1996", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Exploiting Bayes' theorem \u00a1 \u00a9 \u00a7 \u00a2 \u00a1 \u00a1 \u00a7 \u00a4 \u00a3 \u00a1 \u00a9 \u00a7 \u00a5 \u00a1 \u00a1 \u00a9 \u00a7", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "The IBM Translation Models", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "they recast the problem of finding the best translation \u00a6 \u00a9 for a given input", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The IBM Translation Models", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "as \u00a6 \u00a9 \u00a3 \u00a7 \u00a9 \u00a7 \u00a1 \u00a9 \u00a7 \u00a5 \u00a1 \u00a1 \u00a9 \u00a7 (2) \u00a1 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The IBM Translation Models", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is constant for any given input and can therefore be ignored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The IBM Translation Models", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is typically calculated using an n-gram language model. For the sake of simplicity, we assume here and everywhere else in the paper that the ultimate task is to translate from a foreign language into English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
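
{

"text": "The decision rule in Equation (2) can be read directly as code: score each candidate translation by the product of its language model probability and its channel model probability and keep the best one. A minimal sketch, assuming hypothetical log-probability functions lm_logprob and tm_logprob supplied by the caller:

import math

def decode_by_rescoring(f, candidates, lm_logprob, tm_logprob):
    # Equation (2): e* = argmax_e Pr(e) Pr(f | e); Pr(f) is constant and ignored.
    # Working in log space turns the product into a sum and avoids underflow.
    best_e, best_score = None, -math.inf
    for e in candidates:
        score = lm_logprob(e) + tm_logprob(f, e)
        if score > best_score:
            best_e, best_score = e, score
    return best_e",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The IBM Translation Models",

"sec_num": "2"

},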
|
{ |
|
"text": "The model pictures the conversion from English to a foreign language roughly as follows (cf. Fig. 1 ; note that because of the noisy channel approach, the modeling is \"backwards\").", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 99, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each English word \u00a9 , a fertility \u00a3 (with \u00a3 \" ! $ # & % ) is chosen. \u00a3 is called the fertility of \u00a9 . Each word \u00a9 is replaced by \u00a3 foreign words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "After that, the linear order of the foreign words is rearranged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, a certain number \u00a3 % of so-called spurious words (words that have no counterpart in the original English) are inserted into the foreign text. The probability of the value of \u00a3 % depends on the length ' of the original English string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a result, each foreign word is linked, by virtue of the derivation history, to either nothing (the imaginary NULL word), or exactly one word of the English source sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The triple", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) \u00a3 0 2 1 4 3 6 5 7 3 9 8 A @ with 1 ) \u00a3 0 # C B E D F D 3 \u00a9 G 3 I H P H P H P 3 \u00a9 @ , 5 ) \u00a3 Q 0 R G 3 P H I H P H S 3 T @ ,", |
|
"eq_num": "and" |
|
} |
|
], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "8 ) E U $ 3 P H P H I H V 3 W Y X \u00e0 U c b 3 $ 3 P H P H I H V 3 e d X", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is called a sentence alignment. For all pairs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "0 f d $ 3 $ W A @ such that 8 \u00a1 \" W \u00a7 & \u00a3 g d , we say that \u00a9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is aligned with", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a1 \u00a9 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "R T with \u00a9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", respectively. Since each of the changes occurs with a certain probability, we can calculate the translation model probability of ( as the product of the individual probabilities of each of the changes. The product of the translation model probability and the language model probability of 1 is called the alignment probability of ( . Detailed formulas for the calculation of alignment probabilities according to the various models can be found in Brown et al. (1993) . It should be noted here that the calculation of the alignment probability of an entire alignment (", |
|
"cite_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 468, |
|
"text": "Brown et al. (1993)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1 i h $ p r q s u t $ p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ") has linear complexity. Well will show below that by re-evaluating only fractions of an alignment (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1 i p r q v w t $ p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "), we can reduce the evaluation cost to a constant time factor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T", |
|
"sec_num": null |
|
}, |
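
{

"text": "To make the linear-time claim concrete, the following sketch scores an IBM-style alignment as a sum of log probabilities with one factor per word; the factor functions passed in are hypothetical stand-ins for the actual Model 4 translation, fertility, distortion, and language model terms:

def alignment_logprob(e, f, a, t_lp, fert_lp, dist_lp, lm_lp):
    # e: English hypothesis, f: foreign input, a[j] = index of the English
    # word that f[j] is aligned with (0 = the NULL word).
    # One factor per word, so the cost is linear in the sentence lengths.
    score = lm_lp(e)
    fertility = [0] * (len(e) + 1)
    for j, i in enumerate(a):
        fertility[i] += 1
        english_word = e[i - 1] if i > 0 else None
        score += t_lp(f[j], english_word)          # translation factor
        score += dist_lp(j, i, len(e), len(f))     # distortion factor
    for i, phi in enumerate(fertility[1:], start=1):
        score += fert_lp(phi, e[i - 1])            # fertility factor
    return score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The IBM Translation Models",

"sec_num": "2"

},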
|
{ |
|
"text": "The task of the decoder is to revert the process just described. In this subsection we recapitulate the greedy hillclimbing algorithm presented in Germann et al. (2001) . In contrast to all other decoders mentioned in Sec. 1, this algorithm does not process the input one word at a time to incrementally build up a full translation hypothesis. Instead, it starts out with a complete gloss of the input sentence, aligning each input word with the word \u00a9 that maximizes the inverse (with respect to the noisy channel approach) translation probability x \u00a1 \u00a9 \u00a7", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 168, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". (Note that for the calculation of the alignment probability,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x \u00a1 \u00a9 \u00a7 is used.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The decoder then systematically tries out various types of changes to the alignment: changing the translation of a word, inserting extra words, reordering words, etc. These change operations are described in more detail below. In each search iteration, the algorithm makes a complete pass over the alignment, evaluating all possible changes. The simpler, \"faster\" version # \u00a2 $ of the algorithm considers only one operation at a time. A more thorough variant # \u00a1 applies up to two word translation changes, or inserts one zero fertility word in addition to a word translation change before the effect of these changes is evaluated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "At the end of the iteration, the decoder permanently applies that change, or, in the case of # \u00a1 , change combination, that leads to the biggest improvement in alignment probability, and then starts the next iteration. This cycle is repeated until no more improvements can be found.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
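
{

"text": "The control structure of the hill climber can be summarized as follows. This is a minimal sketch of the G1-style loop: generate_changes, apply, and score are hypothetical helpers that enumerate the operations described below, apply one of them to a copy of the alignment, and compute the alignment probability:

def greedy_decode(initial_alignment, generate_changes, apply, score):
    # Start from the gloss and repeat full passes until no change helps;
    # only the single best improvement is applied per iteration.
    current = initial_alignment
    current_score = score(current)
    improved = True
    while improved:
        improved = False
        best_change, best_score = None, current_score
        for change in generate_changes(current):
            s = score(apply(current, change))
            if s > best_score:
                best_change, best_score = change, s
        if best_change is not None:
            current = apply(current, best_change)
            current_score = best_score
            improved = True
    return current",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding Algorithm",

"sec_num": "3.1"

},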
|
{ |
|
"text": "The changes to the alignment that the decoder considers are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "CHANGE the translation of a word: For a given foreign word ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "# \u00a2 $ has a com- plexity of \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 : for each word", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", there is a certain probability that changing the word translation of requires a pass over the complete English hypothesis in order to find the best insertion point. This is the case when is currently either spurious (that is, aligned with the NULL word), or aligned with a word with a fertility of more than one. The probability of this happening, however, is fairly small, so that we can assume for all practical purposes that a CHANGE iteration in ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "in practice. We will argue below that by exploiting the notion of change dependencies, the complexity for CHANGE can be reduced to practically \u00a2 \u00a1\u00a3 \u00a7 for # \u00a1 decoding as well, albeit with a fairly large coefficient.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "INSERT a so-called zero fertility word (i.e., an English word that is not aligned to any foreign word) into the English string. Since all possible positions in the English hypothesis have to be considered, , 2 empirical data indicates that its actual time consumption is very small (cf. Fig. 6 ). This is because the chances of success of a join operation can be determined very cheaply without actually performing the operation. Suppose for the sake of simplicity that", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 293, |
|
"text": "Fig. 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a9 # 1 \u00a1 \u00a3 \u00a2 \u00a1\u00a3 \u00a7 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a9 q q # !", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is aligned with only one word . If the translation probability", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x \u00a1 \u00a9 t \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is zero (which is true most of the time), the resulting alignment probability will be zero. Therefore, we can safely skip such operations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
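
{

"text": "The shortcut just described can be implemented as a constant-time test that is run before a JOIN is actually scored; t_prob and the alignment accessors are hypothetical:

def join_is_worth_trying(alignment, i, j, t_prob):
    # Joining e_i into e_j only makes sense if every foreign word currently
    # aligned with e_i can be produced by e_j with non-zero probability;
    # otherwise the resulting alignment probability is zero and the JOIN
    # can be skipped without evaluating it.
    e_j = alignment.english[j]
    return all(t_prob(fw, e_j) > 0.0
               for fw in alignment.foreign_words_aligned_with(i))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding Algorithm",

"sec_num": "3.1"

},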
|
{ |
|
"text": "SWAP any two non-overlapping regions ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a3 is $ & % G ' t ( G $ & % G ' s ) ( t \u00a1 \u00a4 \u00a3 1 0 3 2 \u00a7\u00a1 \u00a4 \u00a3 4 0 5 2 7 6 $ \u00a7 \u00a3 \u00a3 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 8 0 $ \u00a7\u00a1 \u00a4 \u00a3 9 6 \u00a7 \u00a4 @ Thus, B A ( D C $ \u00a4 E $ G F E v F # \u00a3 \u00a2 \u00a1\u00a3 H \u00a7 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". However, if we limit the size of the swapped regions to a constant I and their distance to a constant P , we can reduce the number of swaps performed to a linear function of the input length. For each start position (defined as the first word of the first swap region), there are at most", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "swaps that can be performed within these limitations. Therefore,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P & I \u00a5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "B A ( E $ G F E v F \u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P & I \u00a5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". It is obvious that the baseline version of this algorithm is very inefficient. In the following subsection, we discuss the algorithm's complexity in more detail. In Sec. 4, we show how the decoding complexity can be reduced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P & I \u00a5", |
|
"sec_num": null |
|
}, |
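
{

"text": "Under these restrictions, the number of SWAP candidates per start position is bounded by a constant. A sketch of the restricted enumeration, with max_segment_size and max_distance standing in for the two limits and distance interpreted here as the gap between the two regions; with the values used later in the paper (5 and 2), this yields at most 50 candidates per start position:

def restricted_swaps(n, max_segment_size, max_distance):
    # Yield pairs of non-overlapping regions (s1, e1, s2, e2), given as
    # inclusive word indices into an n-word hypothesis, whose sizes and
    # mutual distance respect the two limits. Per start position s1 the
    # number of candidates is constant, so the total grows linearly with n.
    for s1 in range(n):
        for e1 in range(s1, min(n, s1 + max_segment_size)):
            for s2 in range(e1 + 1, min(n, e1 + 1 + max_distance)):
                for e2 in range(s2, min(n, s2 + max_segment_size)):
                    yield (s1, e1, s2, e2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding Algorithm",

"sec_num": "3.1"

},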
|
{ |
|
"text": "The total decoding complexity of the search algorithm is the number of search iterations (I) times the number of search steps per search iteration (S) times the evaluation cost per search step (E):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "# \u00a2 $ \u00a3 1 \u00a1 \u00a1 \u00a9 H", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We now show that the original implementation of the algorithm has a complexity of (practically)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1 \u00a4 \u00a3 R Q \u00a7 for # \u00a2 $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "decoding, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1\u00a3 \u00a7 \u00a7 for # \u00a1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "decoding, if swap operations are restricted. With unrestricted swapping, the complexity is \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ". Since our argument is based on some assumptions that cannot be proved formally, we cannot provide a formal complexity proof.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Complexity", |
|
"sec_num": "3.2" |
|
}, |
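
{

"text": "Spelled out with the values derived below for I, S, and E (and under the assumptions stated there), the three factors multiply as follows:

G1, restricted swapping:   O(n) x O(n)   x O(n) = O(n^3)
G2, restricted swapping:   O(n) x O(n^2) x O(n) = O(n^4)
unrestricted swapping:     O(n) x O(n^4) x O(n) = O(n^6)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding Complexity",

"sec_num": "3.2"

},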
|
{ |
|
"text": ". In the original implementation of the algorithm, the entire alignment is evaluated after each search step (global evaluation, or", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 h $ p r q s \" t V p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "). Therefore, the evaluation cost rises linearly with the length of the hypothesized alignment: The evaluation requires two passes over the English hypothesis (n-grams for the language model; fertility probabilities) and two passes over the input string (translation and distortion probabilities). We assume a high correlation between input length and the hypothesis length. Thus, ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 i h pq 6 s u t $ p \u00a3 \u00a2 \u00a1\u00a3 \u00a7 . 2 There are \u00a6 T S V U!", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u00a9 \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a7 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The original algorithm pursues a highly inefficient search strategy. At the end of each iteration, only the single best improvement is executed; all others, even when independent, are discarded. In other words, the algorithm needs one search iteration per improvement. We assume that there is a linear correlation between input length and the number of improvements -an assumption that is supported by the empirical data in Fig. 4 . Therefore,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 430, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u00a9 \u00a3 \u00a2 \u00a1\u00a3 \u00a7 . \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a7 (# \u00a2 $ , restricted swapping) \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 (# \u00a1 , restricted swapping) \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a7 \u00a7 (no restrictions on swapping).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The number of search steps per iteration is the sum of the number of search steps for CHANGE, SWAP, JOIN, INSERT, and ERASE. The highest order term in this sum is unrestricted SWAP with \u00a2 \u00a1\u00a3 \u00a7 \u00a7 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With restricted swapping, S has a theoretical complexity of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 (due to JOIN) in # \u00a2 $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "decoding, but the contribution of the JOIN operation to overall time consumption is so small that it can be ignored for all practical purposes. Therefore, the average complexity of in practice is \u00a2 \u00a1 \u00a4 \u00a3 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ", and the total complexity of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "# \u00a2 $ in practice is # \u00a2 $ E $ G F E v F \u00a1 t \u00a2 \u00a3 \u00a2 $ h \u00a3 \u00a2 \u00a1\u00a3 \u00a7 \u00a1 \u00a2 \u00a1\u00a3 \u00a7 \u00a1 \u00a2 \u00a1\u00a3 \u00a7 F \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 Q \u00a7 S H In # \u00a1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "decoding, which combines up to two CHANGE operations or one CHANGE operation and one INSERT operation, has a practical complexity of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1 # \u00a5 \u00a7 , so that # \u00a1 E $ G F E v F \u00a1 t \u00a2 \u00a3 \u00a2 $ h \u00a3 \u00a2 \u00a1\u00a3 \u00a7 F \u00a1 \u00a2 \u00a1\u00a3 \u00a7 F \u00a1 \u00a2 \u00a1\u00a3 \u00a6 \u00a5 \u00a7 i \u00a3 \u00a2 \u00a1 \u00a4 \u00a3 \u00a7 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ". We discuss below how can be reduced to practically linear time for # \u00a1 decoding as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a3 \u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Every change to the alignment affects only a few of the individual probabilities that make up the overall alignment score: the n-gram contexts of those places in the English hypothesis where a change occurs, plus a few translation model probabilities. We call the -not necessarily contiguous -area of an alignment that is affected by a change the change's local context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reducting Decoder Complexity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "With respect to an efficient implementation of the greedy search, we can exploit the notion of local contexts in two ways. First, we can limit probability recalculations to the local context (that is, those probabilities that actually are affected by the respective change), and secondly, we can develop the notion of change dependencies: Two changes are independent if their local contexts do not overlap. As we will explain below, we can use this notion to devise a scheme of improvement caching and tiling (ICT) that greatly reduces the total number of alignments considered during the search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reducting Decoder Complexity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our argument is that local probability calculations and ICT each reduce the complexity of the algorithm by practically", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reducting Decoder Complexity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ", that is, from", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1\u00a3 \u00a5 \u00a4 \u00a7 to \u00a2 \u00a1\u00a3 \u00a6 \u00a4 % G \u00a7 with \u00a7 \u00a9 $ . Thus, the complexity for # \u00a2 $ decreases from \u00a2 \u00a1\u00a3 R Q \u00a7 to \u00a2 \u00a1\u00a3 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". If we limit the search space for the second operation (CHANGE or INSERT) in # \u00a1 decoding to its local context, # \u00a1 decoding, too, has practically linear complexity, even though with a much higher coefficient (cf Fig. 6 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 220, |
|
"text": "Fig. 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1\u00a3 \u00a7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The complexity of calculating the alignment probability globally (that is, over the entire alignment) is \u00a2 \u00a1\u00a3 \u00a7 . However, since there is a constant upper bound 3 on the size of local contexts,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1 i h $ p r q s u t $ p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "needs to be performed only once for the initial gloss, therafter, recalculation of only those probabilities affected by each change (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1 p r q v w t $ p \u00a3 \u00a2 \u00a1 $ \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ") suffices. This reduces the overall decoding complexity from", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1\u00a3 \u00a4 \u00a7 to \u00a2 \u00a1\u00a3 \u00a4 % G \u00a7 with \u00a7 $ .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Even though profoundly trivial, this improvement significantly reduces translation times, especially when improvements are not tiled (cf. below and Fig. 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 154, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Local Probability Calculations", |
|
"sec_num": "4.1" |
|
}, |
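
{

"text": "A minimal sketch of the language model part of such a local re-evaluation, assuming a trigram model; lm_logprob3 is a hypothetical trigram scorer, and the translation, fertility, and distortion factors of the words inside the local context would be updated analogously:

def lm_delta(old_words, new_words, start, end, lm_logprob3):
    # Only trigrams whose window overlaps the edited span [start, end) can
    # change, so the cost is bounded by a constant (plus the length
    # difference), not by the hypothesis length.
    def local_trigrams(words, lo, hi):
        lo, hi = max(0, lo - 2), min(len(words), hi + 2)
        return [tuple(words[k:k + 3]) for k in range(lo, hi - 2)]
    new_end = end + (len(new_words) - len(old_words))
    delta = sum(lm_logprob3(*t) for t in local_trigrams(new_words, start, new_end))
    delta -= sum(lm_logprob3(*t) for t in local_trigrams(old_words, start, end))
    return delta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Probability Calculations",

"sec_num": "4.1"

},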
|
{ |
|
"text": "Based on the notions of local contexts and change dependencies, we devised the following scheme of improvement caching and tiling (ICT): During the search, we keep track of the best possible change affecting each local context. (In practice, we maintain a map that maps from initial gloss us localities computer system suffer computer virus attack and refused service attack and there various security loopholes instance everywhere Figure 3 : A decoding trace using improvement caching and tiling (ICT). The search in the second and later iterations is limited to areas where a change has been applied (marked in bold print) -note that the number of alignment checked goes down over time. The higher number of alignments checked in the second iteration is due to the insertion of an additional word, which increases the number of possible swap and insertion operations. Decoding without ICT results in the same translation but requires 11 iterations and checks a total of 17701 alignments as opposed to 5 iterations with a total of 4464 alignments with caching. the local context of each change that has been considered to the best change possible that affects exactly this context.) At the end of the search iteration d , we apply a very restricted stack search to find a good tiling of nonoverlapping changes, all of which are applied. The goal of this stack search is to find a tiling that maximizes the overal gain in alignment probability. Possible improvements that overlap with higher-scoring ones are ignored. In the following search iteration d 6 $ , we restrict the search to changes that overlap with changes just applied. We can safely assume that there are no improvements to be found that are independent of the changes applied at the end of iteration d : If there were such improvements, they would have been found in and applied after iteration d . Figure 3 illustrates the procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 440, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1865, |
|
"end": 1873, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Improvement Caching and Tiling 4 (ICT)", |
|
"sec_num": "4.2" |
|
}, |
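
{

"text": "A compact sketch of the caching and tiling step. Here each cached change is assumed to know the set of hypothesis positions it touches (its local context) and its score gain, and the restricted stack search over tilings is simplified to a greedy sweep, which already produces a valid set of mutually independent changes:

def tile_improvements(cached_best):
    # cached_best maps a frozenset of touched positions (a local context)
    # to the best change found for that context. Pick a high-gain set of
    # non-overlapping changes; all of them are applied in one iteration.
    chosen, covered = [], set()
    ranked = sorted(cached_best.items(), key=lambda kv: kv[1].gain, reverse=True)
    for context, change in ranked:
        if change.gain > 0 and not (context & covered):
            chosen.append(change)
            covered |= context
    return chosen, covered  # the next iteration searches only around covered",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Improvement Caching and Tiling 4 (ICT)",

"sec_num": "4.2"

},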
|
{ |
|
"text": "We assume that improvements are, on average, evenly distributed over the input text. Therefore, we can expect the number of places where improvements can be applied to grow with the input length at the same rate as the number of improvements. Without ICT, the number of iterations grows linearly with the input length, as shown in Fig. 4 . With ICT, we can parallelize the improvement process and thus reduce the number of iterations for each search to a constant upper bound, which will be determined by the average 'improvement density' of the domain. One exception to this rule should be noted: since the expected number of spurious words (words with no counterpart in English) in the input is a function of the input length, and since all changes in word translation that involve the NULL word are mutually dependent, we should expect to find a very weak effect of this on the number of search iterations. Indeed, the scatter diagram in Fig.4 suggests a slight increase in the number of iterations as the input length increases. 5 At the same time, however, the number of changes considered during each search iteration eventually decreases, because subsequent search iterations are limited to areas where a change was previously performed. Empirical evidence as plotted on the right in Fig. 4 suggests that this effect \"neutralizes\" the increase in iterations in dependence of the input length: the total number of changes considered indeed appears to grow linearly with the input length. It should be noted that ICT, while it does change the course of the search, primarily avoids redundant search steps -it does not necessarily search a smaller search space, but searches it only once. The total number of improvements found is roughly the same (15,299 with ICT, 14,879 without for the entire test corpus with a maximum swap distance of 2 and a maximum swap segment size of 5). ", |
|
"cite_spans": [ |
|
{ |
|
"start": 1033, |
|
"end": 1034, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 337, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 941, |
|
"end": 946, |
|
"text": "Fig.4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1291, |
|
"end": 1297, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Improvement Caching and Tiling 4 (ICT)", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "With \u00a2 \u00a1 \u00a4 \u00a3 \u00a7 \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": ", unlimited swapping swapping is by far the biggest consumer of processing time during decoding. When translating the Chinese test corpus from the 2002 TIDES MT evaluation 6 without any limitations on swapping, swapping operations account for over 98% of the total search steps but for less than 5% of the improvements; the total translation time (with ICT) is about 34 CPU hours. For comparison, translating with a maximum swap segment size of 5 and a maximum swap distance of 2 takes ca. 40 minutes under otherwise unchanged circumstances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "It should be mentioned that in practice, it is generally not a good idea to run the decoder with without restrictions on swapping. In order to cope with hardware and time limitations, the sentences in the training data are typically limited in length. For example, the models used for the experiments reported here were trained on data with a sentence length limit of 40. Sentence pairs where one of the sentences exceeded this limit were ignored in training. Therefore, any swap that involves a distortion greater than that limit will result in the minimal (smoothed) distortion probability and most likely not lead to an improvement. The question is: How much swapping is enough? Is there any benefit to it at all? This is an interesting question since virtually all efficient MT decoders (e.g. Tillmann and Ney, 2000; Berger et al., 1994; Alshawi et al., 2000; Vidal, 1997) impose limits on word reordering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 797, |
|
"end": 820, |
|
"text": "Tillmann and Ney, 2000;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 841, |
|
"text": "Berger et al., 1994;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 842, |
|
"end": 863, |
|
"text": "Alshawi et al., 2000;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 864, |
|
"end": 876, |
|
"text": "Vidal, 1997)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In order to determine the effect of swap restrictions on decoder performance, we translated the Chinese test corpus 101 times with restrictions on the maximum swap 6 100 short news texts; 878 text segments; ca. 25K tokens/words. distance (MSD) and the maximum swap segment size (MSSS) ranging from 0 to 10 and evaluated the translations with the BLEU 7 metric (Papineni et al., 2002) . The results are plotted in Fig. 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 360, |
|
"end": 383, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 419, |
|
"text": "Fig. 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "On the one hand, the plot seems to paint a pretty clear picture on the low end: score improvements are comparatively large initially but level off quickly. Furthermore, the slight slope suggests slow but continuous improvements as swap restrictions are eased. For the Arabic test data from the same evaluation, we obtained a similar shape (although with a roughly level plateau). On the other hand, the 'bumpiness' of the surface raises the question as to which of these differences are statistically We are aware of several ways to determine the statistical significance of BLEU score differences. One is bootstrap resampling (Efron and Tibshirani, 1993) 8 to determine confidence intervals, another one splitting the test corpus into a certain number of subcorpora (e.g. 30) and then using the t-test to compare the average scores over these subcorpora (cf. Papineni et al., 2001) . Bootstrap resampling for the various system outputs leads to very similar confidence intervals of about 0.006 to 0.007 for a one-sided test at a confidence level of .95. With the t-score method, differences in score of 0.008 or higher seem to be significant at the same level of confidence. According to these metrics, none of the differences in the plot are significant, although the shape of the plot suggests that moderate swapping probably is a good idea.", |
|
"cite_spans": [ |
|
{ |
|
"start": 627, |
|
"end": 655, |
|
"text": "(Efron and Tibshirani, 1993)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 882, |
|
"text": "(cf. Papineni et al., 2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
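
{

"text": "For reference, the bootstrap procedure used to obtain such confidence intervals can be sketched as follows; corpus_bleu is a hypothetical stand-in for the actual BLEU implementation:

import random

def bootstrap_bleu_interval(segments, corpus_bleu, n_resamples=1000, alpha=0.05, seed=0):
    # segments: list of (hypothesis, references) pairs for the whole test set.
    # Resample the test corpus with replacement, rescore each resample, and
    # read the interval off the empirical distribution of scores.
    rng = random.Random(seed)
    scores = sorted(corpus_bleu([rng.choice(segments) for _ in segments])
                    for _ in range(n_resamples))
    lo = scores[int((alpha / 2) * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Restrictions on Word Reordering",

"sec_num": "4.3"

},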
|
{ |
|
"text": "In addition to limitations of the accuracy of the BLEU method itself, variance in the decoders performance can blur the picture. A third method to determine a confidence corridor is therefore to perform several randomized searches and compare their performance. Following a suggestion by Franz Josef Och (personal communications), we ran the decoder multiple times from randomized starting glosses for each sentence and then used the highest scoring one as the \"official\" system output. This gives us a lower bound on the price in performance that we pay for search errors. The results for up to ten searches from randomized starting points in addition to the baseline gloss are given in Tab. 1. Starting points were randomized by randomly picking one of the top 10 translation candidates (instead of the top candidate) for each input word, and performing a (small) random number of SWAP and INSERT operations before the actual search started. In order to insure consistency across repeated runs, we used a pseudo random function. In our experiments, we did not mix Choosing the best sentences from all decoder runs results in a BLEU score of 0.157. Interestingly, the decoding time from the default starting point is much lower (G1: ca. 40 min. vs. ca. 1 hour; G2: ca. 9.5 hours vs. ca. 11.3 hours), and the score, on average, is higher than when searching from a random starting point (G1: 0.143 vs. 0.127 (average); G2: 0.145 vs. 0.139 (average)). This indicates that the default seeding strategy is a good one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
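
{

"text": "The restart strategy itself can be sketched as follows; default_gloss, random_gloss, greedy_decode, and score are hypothetical helpers, with random_gloss perturbing the default gloss by picking among the top 10 translation candidates per word and applying a few random SWAP and INSERT operations, and a fixed seed keeping runs reproducible in the spirit of the pseudo-random function mentioned above:

import random

def decode_with_restarts(sentence, default_gloss, random_gloss,
                         greedy_decode, score, n_restarts=10, seed=0):
    # Run the greedy search from the default gloss plus several randomized
    # starting points and keep the highest-scoring result; the spread of the
    # results bounds the loss incurred by search errors from any single start.
    rng = random.Random(seed)
    best = greedy_decode(default_gloss(sentence))
    for _ in range(n_restarts):
        candidate = greedy_decode(random_gloss(sentence, rng))
        if score(candidate) > score(best):
            best = candidate
    return best",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Restrictions on Word Reordering",

"sec_num": "4.3"

},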
|
{ |
|
"text": "From the results of our experiments we conclude the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "First, Tab. 1 suggests that there is a good correlation between IBM Model 4 scores and the BLEU metric. Higher alignment probabilities lead to higher BLEU scores. Even though hardly any of the score differences are statistically significant (see confidence intervals above), there seems to be a trend.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Secondly, from the swapping experiment we conclude that except for very local word reorderings, neither the IBM models nor the BLEU metric are able to recognize long distance dependencies (such as, for example, accounting for fundamental word order differences when translating from a SOV language into a SVO language). This is hardly surprising, since both the language model for decoding and the BLEU metric rely exclusively on ngrams. This explains why swapping helps so little. For a different approach that is based on dependency tree transformations, see Alshawi et al. (2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 561, |
|
"end": 582, |
|
"text": "Alshawi et al. (2000)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Thirdly, the results of our experiments with randomized searches show that greedy decoding does not perform as well on longer sentences as one might conclude from the findings in Germann et al. (2001) . At the same time, the speed improvements presented in this paper make multiple searches feasible, allowing for an overall faster and better decoder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 200, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restrictions on Word Reordering", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we have analyzed the complexity of the greedy decoding algorithm originally presented in Germann et al. (2001) and presented improvements that drastically reduce the decoder's complexity and speed to practically linear time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 125, |
|
"text": "Germann et al. (2001)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Experimental data suggests a good correlation between IBM Model 4 scores and the BLEU metric. The speed improvements discussed in this paper make multiple randomized searches per sentence feasible, leading to a faster and better decoder for machine translation with IBM Model 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Technically, the complexity is still \" ! . However, the quadratic component has such a small coefficient that it does not have any noticable effect on the translation speed for all reasonable inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In practice, 16 with a trigram language model: a swap of two large segments over a large distance affects four points in the English hypothesis, resulting in U trigrams, plus four individual distortion probabilities.4 Thanks to Daniel Marcu for alerting us to this term in this context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another possible explanation for this increase, especially at the left end, is that \"improvement clusters\" occur rarely enough not to occur at all in shorter sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In a nutshell, the BLEU score measures the n-gram overlap between system-produced test translations and a set of human reference translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are very grateful to Franz Josef Och for various very helpful comments on the work reported in this paper. This work was supported by DARPA-ITO grant N66001-00-1-9814.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning dependency translation models as collections of finite-state head transducers", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Alshawi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hiyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shona", |
|
"middle": [], |
|
"last": "Douglas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "1", |
|
"pages": "45--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alshawi, Hiyan, Douglas, Shona, and Bangalore, Srini- vas. 2000. Learning dependency translation models as collections of finite-state head transducers. Computa- tional Linguistics, 26(1):45-60.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The candide system for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gillet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Printz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lubo\u0161", |
|
"middle": [], |
|
"last": "Ure\u0161", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the Arpa Workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Berger, Adam L., Brown, Peter F., Della Pietra, Stephen A., Della Pietra, Vincent J., Gillet, John R., Lafferty, John D., Mercer, Robert L., Printz, Harry, and Ure\u0161, Lubo\u0161. 1994. The candide system for machine translation. In: Proceedings of the Arpa Workshop on Human Language Technology.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Language translation apparatus and method using context-based translation models", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Kehler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Berger, Adam L., Brown, Peter F., Della Pietra, Stephen A., Della Pietra, Vincent J., Kehler, An- drew S., and Mercer, Robert L. 1996. Language trans- lation apparatus and method using context-based trans- lation models. United States Patent 5,510,981.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A statistical approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Cocke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fredrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Roossin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, Peter F., Cocke, John, Della Pietra, Stephen A., Della Pietra, Vincent J., Jelinek, Fredrick, Lafferty, John D., Mercer, Robert L., and Roossin, Paul S. 1990. A statistical approach to machine translation. Compu- tational Linguistics, 16(2):79-85.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brown, Peter F., Della Pietra, Vincent J., Della Pietra, Stephen A., and Mercer, Robert L. 1993. The mathe- matics of statistical machine translation: Parameter es- timation. Computational Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An Introduction to the Bootstrap", |
|
"authors": [ |
|
{ |
|
"first": "Bradley", |
|
"middle": [], |
|
"last": "Efron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Efron, Bradley and Tibshirani, Robert J. 1993. An Intro- duction to the Bootstrap. Chapman & Hall/CRC.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Fast decoding and optimal decoding for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Germann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jahr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kevin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "228--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Germann, Ulrich, Jahr, Michael, Knight, Kevin, Marcu, Daniel, and Yamada, Kenji. 2001. Fast decoding and optimal decoding for machine translation. In: Proceed- ings of the 39th ACL. Toulouse, France, 228-235.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Decoding complexity in wordreplacement translation models", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "4", |
|
"pages": "607--615", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knight, Kevin. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, 25(4):607-615.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "An efficient A* search algorithm for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Josef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Och, Franz Josef, Ueffing, Nicola, and Ney, Hermann. 2001. An efficient A* search algorithm for statistical machine translation. In: Proceedings of the ACL 2001 Workshop on Data-Driven Methods in Machine Trans- lation. Toulouse, France, 55-62.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Salim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, Kishore, Roukos, Salim, Ward, Todd, and Zhu, Wei-Jing. 2002. Bleu: a method for automatic eval- uation of machine translation. In: Proceedings of the 40th ACL. Philadelphia, PA, 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Salim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tood", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, Kishore, Roukos, Salim, Ward, Tood, and Zhu, Wei-Jing. 2001. Bleu: a method for automatic eval- uation of machine translation. Tech. Rep. RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Word reordering and DP-based search in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 18th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "850--856", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tillmann, Christoph and Ney, Hermann. 2000. Word re- ordering and DP-based search in statistical machine translation. In: Proceedings of the 18th COLING. Saarbr\u00fccken, Germany, 850-856.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Finite-state speech-to-speech translation", |
|
"authors": [ |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Vidal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 22nd ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vidal, Enrique. 1997. Finite-state speech-to-speech trans- lation. In: Proceedings of the 22nd ICASSP. Munich, Germany, 111-114.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Decoding algorithm in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ye-Yi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Waibel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "366--372", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, Ye-Yi and Waibel, Alex. 1997. Decoding algo- rithm in statistical machine translation. In: Proceed- ings of the 35th ACL. Madrid, Spain, 366-372.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "How the IBM models model the translation process. This is a hypothetical example and not taken from any actual training or decoding logs.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "possible join operations for an English string consisting of non-zero-fertility words. seconds) global probability recalculations, no improvement tiling local probability calculations, no improvement tiling global probability calculations, with improvement tiling local probability calculations, with improvement tilingFigure 2: Runtimes for sentences of length 10-80. The graph shows the average runtimes (# \u00a2 $ ) of 10 different sample sentences of the respective length with swap operations restricted to a maximum swap segment size of 5 and a maximum swap distance of 2.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Number of search iterations (left) and total number of alignments considered (right) during search in dependence of input length. The data is taken from the translation of the Chinese testset from the TIDES MT evaluation in June 2002. Translations were performed with a maximum swap distance of 2 and a maximum swap segment size of 5.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "BLEUscores for the Chinese test set (# \u00a2 $ decoding) in dependence of maximum swap distance and maximum swap segment size.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "Time consumption of the various change types in U and decoding (with 10 translations per input word considered, a list of 498 candidates for INSERT, a maximum swap distance of 2 and a maximum swap segment size of 5). The profiles shown are cumulative, so that the top curve reflects the total decoding time. To put the times for decoding in perspective, the dashed line in the lower plot reflects the total decoding time in U decoding. Operations not included in the figures consume so little time that their plots cannot be discerned in the graphs. The times shown are averages of 100 sentences each for length 10", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">alignments checked: 1430 possible improvements: 28 improvements applied: 5 u.s. alignments checked: 1541 u.s. citizens computer system opposed the computer virus attack and rejecting service</td></tr><tr><td>possible improvements: improvements applied:</td><td>3 3</td><td>attack and there are various security loopholes publicize everywhere .</td></tr><tr><td colspan=\"2\">alignments checked: 768</td><td>u.s. citizens computer system opposed to the computer virus attack and rejecting service</td></tr><tr><td colspan=\"2\">possible improvements: 1</td><td>attack and there are various security loopholes publicize everywhere .</td></tr><tr><td colspan=\"2\">improvements applied: 1</td><td/></tr><tr><td colspan=\"2\">alignments checked: 364</td><td>u.s. citizens computer system is opposed to the computer virus attack and rejecting</td></tr><tr><td colspan=\"2\">possible improvements: 1</td><td>service attack and there are various security loopholes publicize everywhere .</td></tr><tr><td colspan=\"2\">improvements applied: 1</td><td/></tr><tr><td colspan=\"2\">alignments checked: 343</td><td>u.s. citizens computer system is opposed to the computer virus attack and rejecting service</td></tr><tr><td colspan=\"2\">possible improvements: 0</td><td>attack and there are various security loopholes publicize everywhere .</td></tr><tr><td colspan=\"2\">improvements applied: 0</td><td/></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "Decoder performance on the June 2002 TIDES MT evluation test set with multiple searches from randomized starting points (MSD=2, MSSS=5). 2% 69.1% 61.2% 55.0% 48.3% 42.5% 36.6% 30.5% 23.9% 20.0% 13.6% * RSER = relative search error rate; percentage output sentences with suboptimal alignment probability significant.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td>default best of 2 searches</td><td>best of 3 searches</td><td>best of 4 searches</td><td>best of 5 searches</td><td>best of 6 searches</td><td>best of 7 searches</td><td>best of 8 searches</td><td>best of 9 searches</td><td>best of 10 searches</td><td>best of 11 searches</td></tr><tr><td>G1</td><td colspan=\"10\">BLEU 0.143 0.145 0.146 0.148 0.148 0.150 0.150 0.150 0.150 0.150 RSER * 93.7% 91.8% 89.8% 87.7% 86.1% 85.2% 83.9% 82.1% 81.2% 80.1% 77.9% 0.151</td></tr><tr><td>G2</td><td colspan=\"9\">BLEU 0.145 0.150 0.151 0.151 0.154 0.154 0.154 0.154 0.154 0.155 RSER 77.</td><td>0.156</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"text": "Thanks to Franz Josef Och for pointing this option out to us.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td># \u00a2 $ and practical reason for this is that # decoding takes more # decoding. The \u00a1 \u00a1 than ten times as long as # decoding. As the table illus-\u00a2 $ trates, running multiple searches in # from randomized \u00a2 $ starting points is more efficient that running # \u00a1 once.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |