{
"paper_id": "C04-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:19:54.695775Z"
},
"title": "Reordering Constraints for Phrase-Based Statistical Machine Translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR RWTH Aachen University",
"location": {
"settlement": "Germany Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR RWTH Aachen University",
"location": {
"settlement": "Germany Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR RWTH Aachen University",
"location": {
"settlement": "Germany Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR RWTH Aachen University",
"location": {
"settlement": "Germany Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In statistical machine translation, the generation of a translation hypothesis is computationally expensive. If arbitrary reorderings are permitted, the search problem is NP-hard. On the other hand, if we restrict the possible reorderings in an appropriate way, we obtain a polynomial-time search algorithm. We investigate different reordering constraints for phrase-based statistical machine translation, namely the IBM constraints and the ITG constraints. We present efficient dynamic programming algorithms for both constraints. We evaluate the constraints with respect to translation quality on two Japanese-English tasks. We show that the reordering constraints improve translation quality compared to an unconstrained search that permits arbitrary phrase reorderings. The ITG constraints preform best on both tasks and yield statistically significant improvements compared to the unconstrained search.",
"pdf_parse": {
"paper_id": "C04-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "In statistical machine translation, the generation of a translation hypothesis is computationally expensive. If arbitrary reorderings are permitted, the search problem is NP-hard. On the other hand, if we restrict the possible reorderings in an appropriate way, we obtain a polynomial-time search algorithm. We investigate different reordering constraints for phrase-based statistical machine translation, namely the IBM constraints and the ITG constraints. We present efficient dynamic programming algorithms for both constraints. We evaluate the constraints with respect to translation quality on two Japanese-English tasks. We show that the reordering constraints improve translation quality compared to an unconstrained search that permits arbitrary phrase reorderings. The ITG constraints preform best on both tasks and yield statistically significant improvements compared to the unconstrained search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical machine translation, we are given a source language ('French') sentence f J 1 = f 1 . . . f j . . . f J , which is to be translated into a target language ('English') sentence e I 1 = e 1 . . . e i . . . e I . Among all possible target language sentences, we will choose the sentence with the highest probability: This decomposition into two knowledge sources is known as the source-channel approach to statistical machine translation (Brown et al., 1990) . It allows an independent modeling of target language model P r(e I 1 ) and translation model P r(f J 1 |e I 1 ). The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. It can be further decomposed into alignment and lexicon model. The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. We have to maximize over all possible target language sentences.",
"cite_spans": [
{
"start": 450,
"end": 470,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "e I 1 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An alternative to the classical sourcechannel approach is the direct modeling of the posterior probability P r(e I 1 |f J 1 ). Using a loglinear model (Och and Ney, 2002) , we obtain:",
"cite_spans": [
{
"start": 151,
"end": 170,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "P r(e I 1 |f J 1 ) = exp M m=1 \u03bb m h m (e I 1 , f J 1 ) \u2022 Z(f J 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, Z(f J 1 ) denotes the appropriate normalization constant. As a decision rule, we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "e I 1 = argmax e I 1 M m=1 \u03bb m h m (e I 1 , f J 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
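To make the decision rule concrete, here is a minimal Python sketch (our illustration, not the paper's actual system): the candidate list, feature functions, and weights are hypothetical toy stand-ins, and the normalization $Z(f_1^J)$ is omitted since it cancels in the argmax.

```python
# Minimal sketch of the log-linear decision rule (toy stand-ins; in the real
# system, the h_m would be log-probabilities of the language and translation
# models, and the candidates would come from the search procedure).

def best_translation(candidates, feature_fns, lambdas, source):
    """Return the candidate e maximizing sum_m lambda_m * h_m(e, f)."""
    def score(e):
        return sum(lam * h(e, source) for lam, h in zip(lambdas, feature_fns))
    return max(candidates, key=score)

# Hypothetical feature functions for illustration only.
h_length = lambda e, f: -abs(len(e.split()) - len(f.split()))   # length match
h_overlap = lambda e, f: len(set(e.split()) & set(f.split()))   # word overlap

print(best_translation(["this is good", "good this"],
                       [h_length, h_overlap], [0.7, 0.3], "das ist gut"))
```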
{
"text": "This approach is a generalization of the source-channel approach. It has the advantage that additional models or feature functions can be easily integrated into the overall system. The model scaling factors \u03bb M 1 are trained according to the maximum entropy principle, e.g. using the GIS algorithm. Alternatively, one can train them with respect to the final translation quality measured by some error criterion (Och, 2003) .",
"cite_spans": [
{
"start": 412,
"end": 423,
"text": "(Och, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we will investigate the reordering problem for phrase-based translation approaches. As the word order in source and target language may differ, the search algorithm has to allow certain reorderings. If arbitrary reorderings are allowed, the search problem is NP-hard (Knight, 1999) . To obtain an efficient search algorithm, we can either restrict the possible reorderings or we have to use an approximation algorithm. Note that in the latter case we cannot guarantee to find an optimal solution.",
"cite_spans": [
{
"start": 282,
"end": 296,
"text": "(Knight, 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining part of this work is structured as follows: in the next section, we will review the baseline translation system, namely the alignment template approach. Afterward, we will describe different reordering constraints. We will begin with the IBM constraints for phrase-based translation. Then, we will describe constraints based on inversion transduction grammars (ITG). In the following, we will call these the ITG constraints. In Section 4, we will present results for two Japanese-English translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we give a brief description of the translation system, namely the alignment template approach. The key elements of this translation approach (Och et al., 1999) are the alignment templates. These are pairs of source and target language phrases with an alignment within the phrases. The alignment templates are build at the level of word classes. This improves the generalization capability of the alignment templates.",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "(Och et al., 1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Template Approach",
"sec_num": "2"
},
{
"text": "We use maximum entropy to train the model scaling factors (Och and Ney, 2002) . As feature functions we use a phrase translation model as well as a word translation model. Additionally, we use two language model feature functions: a word-based trigram model and a class-based five-gram model. Furthermore, we use two heuristics, namely the word penalty and the alignment template penalty. To model the alignment template reorderings, we use a feature function that penalizes reorderings linear in the jump width.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Template Approach",
"sec_num": "2"
},
{
"text": "A dynamic programming beam search algorithm is used to generate the translation hypothesis with maximum probability. This search algorithm allows for arbitrary reorderings at the level of alignment templates. Within the alignment templates, the reordering is learned in training and kept fix during the search process. There are no constraints on the reorderings within the alignment templates. This is only a brief description of the alignment template approach. For further details, see (Och et al., 1999; Och and Ney, 2002) .",
"cite_spans": [
{
"start": 489,
"end": 507,
"text": "(Och et al., 1999;",
"ref_id": "BIBREF8"
},
{
"start": 508,
"end": 526,
"text": "Och and Ney, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Template Approach",
"sec_num": "2"
},
{
"text": "Although unconstrained reordering looks perfect from a theoretical point of view, we find that in practice constrained reordering shows better performance. The possible advantages of reordering constraints are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Constraints",
"sec_num": "3"
},
{
"text": "1. The search problem is simplified. As a result there are fewer search errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Constraints",
"sec_num": "3"
},
{
"text": "2. Unconstrained reordering is only helpful if we are able to estimate the reordering probabilities reliably, which is unfortunately not the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Constraints",
"sec_num": "3"
},
{
"text": "In this section, we will describe two variants of reordering constraints. The first constraints are based on the IBM constraints for singleword based translation models. The second constraints are based on ITGs. In the following, we will use the term \"phrase\" to mean either a sequence of words or a sequence of word classes as used in the alignment templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Constraints",
"sec_num": "3"
},
{
"text": "In this section, we describe restrictions on the phrase reordering in spirit of the IBM constraints (Berger et al., 1996) . First, we briefly review the IBM constraints at the word level. The target sentence is produced word by word. We keep a coverage vector to mark the already translated (covered) source positions. The next target word has to be the translation of one of the first k uncovered, i.e. not translated, source positions. The IBM constraints are illustrated in Figure 1 . For further details see e.g. (Tillmann and Ney, 2003) .",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 517,
"end": 541,
"text": "(Tillmann and Ney, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
{
"text": "For the phrase-based translation approach, we use the same idea. The target sentence is produced phrase by phrase. Now, we allow skipping of up to k phrases. If we set k = 0, we obtain a search that is monotone at the phrase level as a special case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
{
"text": "The search problem can be solved using dynamic programming. We define a auxiliary function Q(j, S, e). Here, the source position j is the first unprocessed source position; with unprocessed, we mean this source position is neither translated nor skipped. We use the set S = {(j n , l n )|n = 1, ..., N } to keep track of the skipped source phrases with lengths l n and starting positions j n . We show the formulae for a bigram language model and use the target language word e to keep track of the language model history. The symbol $ is used to mark the sentence start and the sentence end. The extension to higher-order n-gram language models is straightforward. We use M to denote the maximum phrase length in the source language. We obtain the following dynamic programming equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
{
"text": "Q(1, \u2205, $) = 1 Q(j, S, e) = max max e ,\u1ebd max j\u2212M \u2264j <j Q(j , S, e ) \u2022 p(f j\u22121 j |\u1ebd) \u2022 p(\u1ebd|e ), max (j ,l)\u2208S S=S \\{(j ,l)} Q(j, S , e ) \u2022 p(f j +l\u22121 j |\u1ebd) \u2022 p(\u1ebd|e ) , max j\u2212M \u2264j <j S :S=S \u222a{(j ,j\u2212j )}\u2227|S |<k Q(j , S , e) Q(J + 2, \u2205, $) = max e Q(J + 1, \u2205, e) \u2022 p($|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
{
"text": "In the recursion step, we have distinguished three cases: in the first case, we translate the next source phrase. This is the same expansion that is done in monotone search. In the second case, we translate a previously skipped phrase, and in the third case, we skip a source phrase. For notational convenience, we have omitted one constraint in the preceding equations: the final word of the target phrase $\\tilde{e}$ is the new language model state $e$ (using a bigram language model). Now, we analyze the complexity of this algorithm. Let $E$ denote the vocabulary size of the target language and let $\\tilde{E}$ denote the maximum number of phrase translation candidates for a given source phrase. Then, $J \\cdot (J \\cdot M)^k \\cdot E$ is an upper bound for the size of the Q-table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
{
"text": "Once we have fixed a specific element of this table, the maximization steps can be done in $O(E \\cdot \\tilde{E} \\cdot (M + k - 1) + (k - 1))$. Therefore, the complexity of this algorithm is $O(J \\cdot (J \\cdot M)^k \\cdot E \\cdot (E \\cdot \\tilde{E} \\cdot (M + k - 1) + (k - 1)))$. Assuming $k < M$, this can be simplified to $O((J \\cdot M)^{k+1} \\cdot E^2 \\cdot \\tilde{E})$. As already mentioned, setting $k = 0$ results in a search algorithm that is monotone at the phrase level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Constraints",
"sec_num": "3.1"
},
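The following Python sketch renders the $Q(j, S, e)$ recursion directly, under simplifying assumptions that are ours, not the paper's: every phrase is a single source word ($M = 1$), the lexicon model p_lex and the bigram model p_lm are toy dictionaries with a small smoothing default, and $S$ is a frozenset of skipped source positions.

```python
from functools import lru_cache

SOURCE = ("f1", "f2", "f3")     # source sentence, positions 1..J
TARGETS = ("e1", "e2", "e3")    # toy target vocabulary
K = 1                           # maximum number of skipped phrases

P_LEX = {("f1", "e1"): 0.9, ("f2", "e2"): 0.9, ("f3", "e3"): 0.9}
P_LM = {("$", "e2"): 0.6, ("e2", "e1"): 0.6, ("e1", "e3"): 0.6, ("e3", "$"): 0.6}

def p_lex(f, e): return P_LEX.get((f, e), 1e-4)
def p_lm(e_prev, e): return P_LM.get((e_prev, e), 1e-4)

@lru_cache(maxsize=None)
def Q(j, S, e):
    """Best score: first unprocessed position j, skipped set S, LM state e."""
    if j == 1:
        return 1.0 if (not S and e == "$") else 0.0
    best = 0.0
    states = ("$",) + TARGETS
    if e != "$":
        # Case 1: translate source position j-1 (the monotone expansion).
        if (j - 1) not in S:
            best = max(Q(j - 1, S, ep) * p_lex(SOURCE[j - 2], e) * p_lm(ep, e)
                       for ep in states)
        # Case 2: translate a word jp that the predecessor had skipped,
        # i.e. the predecessor's skip set was S' = S + {jp}.
        for jp in range(1, j):
            if jp not in S and len(S) < K:
                Sp = S | frozenset({jp})
                best = max([best] + [Q(j, Sp, ep) * p_lex(SOURCE[jp - 1], e) * p_lm(ep, e)
                                     for ep in states])
    # Case 3: skip source position j-1 (it stays in the skip set).
    if (j - 1) in S:
        best = max(best, Q(j - 1, S - frozenset({j - 1}), e))
    return best

J = len(SOURCE)
print(max(Q(J + 1, frozenset(), e) * p_lm(e, "$") for e in TARGETS))
```

In this toy example, the language model prefers e2 before e1 while the lexicon translates f1 to e1 and f2 to e2, so the best derivation skips f1, translates f2 first, and then returns to the skipped position, which is exactly the kind of reordering the IBM constraints permit.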
{
"text": "In this section, we describe the ITG constraints (Wu, 1995; Wu, 1997) . Here, we interpret the input sentence as a sequence of blocks. In the beginning, each alignment template is a block of its own. Then, the reordering process can be interpreted as follows: we select two consecutive blocks and merge them to a single block by choosing between two options: either keep the target phrases in monotone order or invert the order. This idea is illustrated in Figure 2. The dark boxes represent the two blocks to be merged. Once two blocks are merged, they are treated as a single block and they can be only merged further as a whole. It is not allowed to merge one of the subblocks again.",
"cite_spans": [
{
"start": 49,
"end": 59,
"text": "(Wu, 1995;",
"ref_id": "BIBREF14"
},
{
"start": 60,
"end": 69,
"text": "Wu, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 457,
"end": 463,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "ITG Constraints",
"sec_num": "3.2"
},
{
"text": "The ITG constraints allow for a polynomialtime search algorithm. It is based on the following dynamic programming recursion equations. During the search a table Q j l ,j r ,e b ,e t is constructed. Here, Q j l ,j r ,e b ,e t denotes the probability of the best hypothesis translating the source words from position j l (left) to position j r (right) which begins with the target language word e b (bottom) and ends with the word e t (top). This is illustrated in Figure 3 . The initialization is done with the phrasebased model described in Section 2. We introduce a new parameter p m (m= monotone), which denotes the probability of a monotone combination of two partial hypotheses. Here, we formulate the recursion equation for a bigram language model, but of course, the same method can also be applied for a trigram lan- The resulting algorithm is similar to the CYKparsing algorithm. It has a worst-case complexity of O(J 3 \u2022 E 4 ). Here, J is the length of the source sentence and E is the vocabulary size of the target language.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 471,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Dynamic Programming Algorithm",
"sec_num": "3.2.1"
},
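As an illustration, the following Python sketch implements this CYK-style recursion with toy models (our stand-ins, not the paper's alignment templates): each source block has hypothetical single-word translation candidates, and P_LM / P_MONO are made-up scores.

```python
from itertools import product

CANDS = {0: {"e1": 0.9}, 1: {"e2": 0.8, "e3": 0.2}, 2: {"e4": 0.9}}
P_LM = {("e2", "e1"): 0.6, ("e1", "e4"): 0.6}   # toy bigram scores
P_MONO = 0.6                                    # p_m from the text

def p_lm(a, b): return P_LM.get((a, b), 1e-3)

J = len(CANDS)
Q = {}                                          # (jl, jr, eb, et) -> best score
for j in range(J):                              # initialization: single blocks
    for e, p in CANDS[j].items():
        Q[(j, j, e, e)] = p
for span in range(2, J + 1):                    # CYK over increasing span length
    for jl in range(J - span + 1):
        jr = jl + span - 1
        for k in range(jl, jr):
            left = [(key, v) for key, v in Q.items() if key[0] == jl and key[1] == k]
            right = [(key, v) for key, v in Q.items() if key[0] == k + 1 and key[1] == jr]
            for ((_, _, lb, lt), lv), ((_, _, rb, rt), rv) in product(left, right):
                # monotone: left target phrase then right target phrase
                key, val = (jl, jr, lb, rt), lv * rv * p_lm(lt, rb) * P_MONO
                Q[key] = max(Q.get(key, 0.0), val)
                # inverted: right target phrase then left target phrase
                key, val = (jl, jr, rb, lt), rv * lv * p_lm(rt, lb) * (1 - P_MONO)
                Q[key] = max(Q.get(key, 0.0), val)

best = max(v for (jl, jr, _, _), v in Q.items() if jl == 0 and jr == J - 1)
print(f"best score: {best:.6f}")
```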
{
"text": "For the ITG constraints a dynamic programming search algorithm exists as described in the previous section. It would be more practical with respect to language model recombination to have an algorithm that generates the target sentence word by word or phrase by phrase. The idea is to start with the beam search decoder for unconstrained search and modify it in such a way that it will produce only reorderings that do not violate the ITG constraints. Now, we describe one way to obtain such a decoder. It has been pointed out in (Zens and Ney, 2003) that the ITG constraints can be characterized as follows: a reordering violates the ITG constraints if and only if it contains (3, 1, 4, 2) or (2, 4, 1, 3) as a subsequence. This means, if we select four columns and the corresponding rows from the alignment matrix and we obtain one of the two patterns illustrated in Figure 4 , this reordering cannot be generated with the ITG constraints.",
"cite_spans": [
{
"start": 530,
"end": 550,
"text": "(Zens and Ney, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 869,
"end": 877,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
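This characterization translates directly into a brute-force permutation check. The sketch below is ours and only for illustration (O(n^4) over all 4-element subsequences; the decoder itself uses the incremental test described next): a phrase reordering violates the ITG constraints exactly when it contains one of the two patterns.

```python
from itertools import combinations

def contains_pattern(perm, pattern):
    """True if some subsequence of perm has the same relative order as pattern."""
    for idxs in combinations(range(len(perm)), len(pattern)):
        values = [perm[i] for i in idxs]
        ranks = tuple(sorted(values).index(v) + 1 for v in values)
        if ranks == pattern:
            return True
    return False

def satisfies_itg(perm):
    return not (contains_pattern(perm, (3, 1, 4, 2)) or
                contains_pattern(perm, (2, 4, 1, 3)))

print(satisfies_itg((2, 1, 4, 3)))  # True: reachable by monotone/inverted merges
print(satisfies_itg((3, 1, 4, 2)))  # False: the left pattern in Figure 4
```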
{
"text": "Now, we have to modify the beam search decoder such that it cannot produce these two patterns. We implement this in the following way. During the search, we have a coverage vector cov of the source sentence available for each partial hypothesis. A coverage vec- tor is a binary vector marking the source sentence words that have already been translated (covered). Additionally, we know the current source sentence position j c and a candidate source sentence position j n to be translated next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
{
"text": "To avoid the patterns in Figure 4 , we have to constrain the placement of the third phrase, because once we have placed the first three phrases we also have determined the position of the fourth phrase as the remaining uncovered position. Thus, we check the following constraints:",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
{
"text": "case a) j n < j c (1) \u2200j n < j < j c : cov[j] \u2192 cov[j + 1] case b) j c < j n (2) \u2200j c < j < j n : cov[j] \u2192 cov[j \u2212 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
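A direct rendering of this check in Python (our sketch, with 0-based positions instead of the 1-based notation above):

```python
# cov marks covered source positions, jc is the current position, jn the
# candidate position to be translated next.

def extension_allowed(cov, jc, jn):
    """Equations 1 and 2: never step from an uncovered onto a covered position."""
    if jn < jc:  # case a), Equation 1: cov[j] implies cov[j+1] for jn < j < jc
        return all(not cov[j] or cov[j + 1] for j in range(jn + 1, jc))
    if jn > jc:  # case b), Equation 2: cov[j] implies cov[j-1] for jc < j < jn
        return all(not cov[j] or cov[j - 1] for j in range(jc + 1, jn))
    return True

# Jumping from position 3 back to 0 would pass covered position 1, whose
# successor 2 is uncovered, so the extension is rejected.
print(extension_allowed([False, True, False, True], 3, 0))  # False
```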
{
"text": "The constraints in Equations 1 and 2 enforce the following: imagine, we traverse the coverage vector cov from the current position j c to the position to be translated next j n . Then, it is not allowed to move from an uncovered position to a covered one. Now, we sketch the proof that these constraints are equivalent to the ITG constraints. It is easy to see that the constraint in Equation 1 avoids the pattern on the left-hand side in Figure 4 . To be precise: after placing the first two phrases at (b,1) and (d,2) , it avoids the placement of the third phrase at (a,3) . Similarly, the constraint in Equation 2 avoid the pattern on the right-hand side in Figure 4 . Therefore, if we enforce the constraints in Equation 1 and Equation 2, we cannot violate the ITG constraints.",
"cite_spans": [
{
"start": 514,
"end": 519,
"text": "(d,2)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 439,
"end": 447,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 559,
"end": 574,
"text": "phrase at (a,3)",
"ref_id": "FIGREF2"
},
{
"start": 661,
"end": 669,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
{
"text": "We still have to show that we can generate all the reorderings that do not violate the ITG constraints. Equivalently, we show that any reordering that violates the constraints in Equation 1 or Equation 2 will also violate the ITG constraints. It is rather easy to see that any reordering that violates the constraint in Figure 4 . The conditions to violate Equation 1 are the following: the new candidate position j n is to the left of the current position j c , e.g. positions (a) and (d) . Somewhere in between there has to be an covered position j whose successor position j + 1 is uncovered, e.g. (b) and (c). Therefore, any reordering that violates Equation 1 generates the pattern on the left-hand side in Figure 4 , thus it violates the ITG constraints.",
"cite_spans": [
{
"start": 486,
"end": 489,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 712,
"end": 720,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Beam Search Algorithm",
"sec_num": "3.2.2"
},
{
"text": "To investigate the effect of reordering constraints, we have chosen two Japanese-English tasks, because the word order in Japanese and English is rather different. The first task is the Basic Travel Expression Corpus (BTEC) task (Takezawa et al., 2002) . The corpus statistics are shown in Table 1 . This corpus consists of phrasebook entries. The second task is the Spoken Language DataBase (SLDB) task (Morimoto et al., 1994) . This task consists of transcription of spoken dialogs in the domain of hotel reservation. Here, we use domain-specific training data in addition to the BTEC corpus. The corpus statistics of this additional corpus are shown in Table 2 . The development corpus is the same for both tasks.",
"cite_spans": [
{
"start": 229,
"end": 252,
"text": "(Takezawa et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 404,
"end": 427,
"text": "(Morimoto et al., 1994)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 656,
"end": 663,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.1"
},
{
"text": "Criteria WER (word error rate). The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the reference sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "PER (position-independent word error rate). A shortcoming of the WER is that it requires a perfect word order. The word order of an acceptable sentence can be different from that of the target sentence, so that the WER measure alone could be misleading. The PER compares the words in the two sentences ignoring the word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
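For concreteness, minimal Python sketches of WER and PER follow (standard single-reference formulations, not the paper's evaluation code; the paper takes the best match over up to 16 references, and the PER shown is one common variant):

```python
from collections import Counter

def wer(hyp, ref):
    """Word error rate: word-level Levenshtein distance, normalized by |ref|."""
    h, r = hyp.split(), ref.split()
    d = list(range(len(r) + 1))            # DP row: distances for 0 hyp words
    for i, hw in enumerate(h, 1):
        prev, d[0] = d[0], i
        for j, rw in enumerate(r, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (hw != rw))   # substitution (or match)
            prev, d[j] = d[j], cur
    return d[len(r)] / len(r)

def per(hyp, ref):
    """Position-independent error rate: compares bags of words, ignoring order."""
    h, r = hyp.split(), ref.split()
    matches = sum((Counter(h) & Counter(r)).values())
    return (max(len(h), len(r)) - matches) / len(r)

print(wer("the man is here", "a man is here"))    # 0.25: one substitution
print(per("here is the man", "the man is here"))  # 0.0: same bag of words
```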
{
"text": "BLEU. This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a reference translation with a penalty for too short sentences (Papineni et al., 2002) . The BLEU score measures accuracy, i.e. large BLEU scores are better.",
"cite_spans": [
{
"start": 164,
"end": 187,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "NIST. This score is similar to BLEU. It is a weighted n-gram precision in combination with a penalty for too short sentences (Doddington, 2002) . The NIST score measures accuracy, i.e. large NIST scores are better.",
"cite_spans": [
{
"start": 125,
"end": 143,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "Note that for each source sentence, we have as many as 16 references available. We compute all the preceding criteria with respect to multiple references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "In Table 3 and Table 4 , we show the translation results for the BTEC task. First, we observe that the overall quality is rather high on this task. The average length of the used alignment templates is about five source words in all systems. The monotone search (mon) shows already good performance on short sentences with less than 10 words. We conclude that for short sentences the reordering is captured within the alignment templates. On the other hand, the monotone search degrades for long sentences with at least 10 words resulting in a WER of 16.6% for these sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 22,
"text": "Table 3 and Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "System Comparison",
"sec_num": "4.3"
},
{
"text": "We present the results for various nonmonotone search variants: the first one is with the IBM constraints (skip) as described in Section 3.1. We allow for skipping one or two phrases. Our experiments showed that if we set the maximum number of phrases to be skipped to three or more the translation results are equivalent to the search without any reordering constraints (free). The results for the ITG constraints as described in Section 3.2 are also presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Comparison",
"sec_num": "4.3"
},
{
"text": "The unconstrained reorderings improve the total translation quality down to a WER of 11.5%. We see that especially the long sentences benefit from the reorderings resulting in an improvement from 16.6% to 13.8%. Comparing the results for the free reorderings and the ITG reorderings, we see that the ITG system always outperforms the unconstrained system. The improvement on the whole test set is statistically significant at the 95% level. 1 In Table 5 and Table 6 , we show the results for the SLDB task. First, we observe that the overall quality is lower than for the BTEC task. The SLDB task is a spoken language translation task and the training corpus for spoken language is rather small. This is also reflected in the average length of the used alignment templates that is about three source words compared to about five words for the BTEC task. The results on this task are similar to the results on the BTEC task. Again, the ITG constraints perform best. Here, the improvement compared to the unconstrained search is statistically significant at the 99% level. Compared to the monotone search, the BLEU score for the ITG constraints improves from 54.4% to 57.1%.",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 465,
"text": "Table 5 and Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "System Comparison",
"sec_num": "4.3"
},
{
"text": "Recently, phrase-based translation approaches became more and more popular. Marcu and Wong (2002) present a joint probability model for phrase-based translation. In (Koehn et al., 2003) , various aspects of phrase-based systems are compared, e.g. the phrase extraction method, the underlying word alignment model, or the maximum phrase length.",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "Marcu and Wong (2002)",
"ref_id": "BIBREF5"
},
{
"start": 165,
"end": 185,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In (Vogel, 2003) , a phrase-based system is used that allows reordering within a window of up to three words. Improvements for a Chinese-English task are reported compared to a monotone search. The ITG constraints were introduced in (Wu, 1995) . The applications were, for instance, the segmentation of Chinese character sequences into Chinese words and the bracketing of the source sentence into sub-sentential chunks. Investigations on the IBM constraints (Berger et al., 1996) for single-word based statistical machine translation can be found e.g. in (Tillmann and Ney, 2003) . A comparison of the ITG constraints and the IBM constraints for single-word based models can be found in (Zens and Ney, 2003) . In this work, we investigated these reordering constraints for phrasebased statistical machine translation.",
"cite_spans": [
{
"start": 3,
"end": 16,
"text": "(Vogel, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 233,
"end": 243,
"text": "(Wu, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 458,
"end": 479,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 555,
"end": 579,
"text": "(Tillmann and Ney, 2003)",
"ref_id": "BIBREF12"
},
{
"start": 687,
"end": 707,
"text": "(Zens and Ney, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We have presented different reordering constraints for phrase-based statistical machine translation, namely the IBM constraints and the ITG constraints, as well as efficient dynamic programming algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Translation results were reported for two Japanese-English translation tasks. Both type of reordering constraints resulted in improvements compared to a monotone search. Restricting the reorderings according to the IBM constraints resulted already in a translation quality similar to an unconstrained search. The translation results with the ITG constraints even outperformed the unconstrained search consistently on all error criteria. The improvements have been found statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The ITG constraints showed the best performance on both tasks. Therefore we plan to further improve this method. Currently, the probability model for the ITG constraints is very simple. More sophisticated models, such as phrase dependent inversion probabilities, might be promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The statistical significance test were done for the WER using boostrap resampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially done at the Spoken Language Translation Research Laboratories (SLT) at the Advanced Telecommunication Research Institute International (ATR), Kyoto, Japan. This research was supported in part by the Telecommunications Advancement Organization of Japan. This work has been partially funded by the EU project PF-Star, IST-2001-37599. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language translation apparatus and method of using context-based translation models, United States patent",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Gillett",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Kehler",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, J. R. Gillett, A. S. Kehler, and R. L. Mercer. 1996. Language translation apparatus and method of using context-based translation models, United States patent, patent number 5510981, April.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statisti- cal approach to machine translation. Compu- tational Linguistics, 16(2):79-85, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proc. ARPA Workshop on Human Language Technology.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "607--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Knight. 1999. Decoding complexity in word- replacement translation models. Computational Linguistics, 25(4):607-615, December.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Human Language Technology Conf. (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, F. J. Och, and D. Marcu. 2003. Sta- tistical phrase-based translation. In Proc. of the Human Language Technology Conf. (HLT- NAACL), pages 127-133, Edmonton, Canada, May/June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Conf. on Empirical Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "133--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. Conf. on Empirical Meth- ods for Natural Language Processing, pages 133- 139, Philadelphia, PA, July.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A speech and language database for speech translation research",
"authors": [
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Uratani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Furuse",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sobashima",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Higuchi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of the 3rd Int. Conf. on Spoken Language Processing (ICSLP'94)",
"volume": "",
"issue": "",
"pages": "1791--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sag- isaka, N. Higuchi, and Y. Yamazaki. 1994. A speech and language database for speech trans- lation research. In Proc. of the 3rd Int. Conf. on Spoken Language Processing (ICSLP'94), pages 1791-1794, Yokohama, Japan, September.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2002. Discriminative train- ing and maximum entropy models for statisti- cal machine translation. In Proc. of the 40th Annual Meeting of the Association for Com- putational Linguistics (ACL), pages 295-302, Philadelphia, PA, July.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved alignment models for statistical machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Och, C. Tillmann, and H. Ney. 1999. Im- proved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Pro- cessing and Very Large Corpora, pages 20-28, University of Maryland, College Park, MD, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL), pages 160- 167, Sapporo, Japan, July.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. Bleu: a method for automatic evalua- tion of machine translation. In Proc. of the 40th Annual Meeting of the Association for Com- putational Linguistics (ACL), pages 311-318, Philadelphia, PA, July.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Toward a broadcoverage bilingual corpus for speech translation of travel conversations in the real world",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad- coverage bilingual corpus for speech translation of travel conversations in the real world. In Proc. of the Third Int. Conf. on Language Re- sources and Evaluation (LREC), pages 147-152, Las Palmas, Spain, May.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "97--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillmann and H. Ney. 2003. Word reordering and a dynamic programming beam search algo- rithm for statistical machine translation. Com- putational Linguistics, 29(1):97-133, March.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SMT decoder dissected: Word reordering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Int. Conf. on Natural Language Processing and Knowledge Engineering (NLP-KE)",
"volume": "",
"issue": "",
"pages": "561--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Vogel. 2003. SMT decoder dissected: Word re- ordering. In Proc. of the Int. Conf. on Natural Language Processing and Knowledge Engineer- ing (NLP-KE), pages 561-566, Beijing, China, October.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "1328--1334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1995. Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora. In Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI), pages 1328- 1334, Montreal, August.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel cor- pora. Computational Linguistics, 23(3):377- 403, September.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A comparative study on reordering constraints in statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens and H. Ney. 2003. A comparative study on reordering constraints in statistical machine translation. In Proc. of the 41th Annual Meet- ing of the Association for Computational Lin- guistics (ACL), pages 144-151, Sapporo, Japan, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Illustration of the IBM constraints with k = 3, i.e. up to three positions may be skipped.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Illustration of monotone and inverted concatenation of two consecutive blocks.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Illustration of the Q-table. guage model. Q j l ,j r ,e b ,e t = max j l \u2264k<j r , e ,e",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Illustration of the two reordering patterns that violate the ITG constraints.",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"text": "Statistics of the BTEC corpus.",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Japanese English</td></tr><tr><td colspan=\"2\">train Sentences</td><td>152 K</td></tr><tr><td/><td>Words</td><td>1 044 K</td><td>893 K</td></tr><tr><td/><td>Vocabulary</td><td>17 047</td><td>12 020</td></tr><tr><td>dev</td><td>sentences</td><td>500</td></tr><tr><td/><td>words</td><td>3 361</td><td>2 858</td></tr><tr><td>test</td><td>sentences</td><td>510</td></tr><tr><td/><td>words</td><td>3 498</td><td>-</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Statistics of the SLDB corpus.",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Japanese English</td></tr><tr><td colspan=\"2\">train Sentences</td><td>15 K</td></tr><tr><td/><td>Words</td><td>201 K</td><td>190 K</td></tr><tr><td/><td>Vocabulary</td><td>4 757</td><td>3 663</td></tr><tr><td>test</td><td>sentences</td><td>330</td></tr><tr><td/><td>words</td><td>3 940</td><td>-</td></tr><tr><td colspan=\"4\">Equation 1 will generate the pattern on the</td></tr><tr><td colspan=\"2\">left-hand side in</td><td/></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "Translation performance WER[%] for the BTEC task (510 sentences). Sentence lengths: short: < 10 words, long: \u2265 10 words; times in milliseconds per sentence.",
"num": null,
"content": "<table><tr><td/><td>WER[%]</td><td/></tr><tr><td/><td colspan=\"2\">sentence length</td></tr><tr><td colspan=\"2\">reorder short long</td><td colspan=\"2\">all time[ms]</td></tr><tr><td>mon</td><td colspan=\"2\">11.4 16.6 12.7</td><td>73</td></tr><tr><td>skip 1</td><td colspan=\"2\">10.8 13.5 11.4</td><td>134</td></tr><tr><td>2</td><td colspan=\"2\">10.8 13.4 11.4</td><td>169</td></tr><tr><td>free</td><td colspan=\"2\">10.8 13.8 11.5</td><td>194</td></tr><tr><td>ITG</td><td colspan=\"2\">10.6 12.2 11.0</td><td>164</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"5\">: Translation performance for the</td></tr><tr><td colspan=\"3\">BTEC task (510 sentences).</td><td/><td/></tr><tr><td/><td colspan=\"4\">error rates[%] accuracy measures</td></tr><tr><td colspan=\"4\">reorder WER PER BLEU[%]</td><td>NIST</td></tr><tr><td>mon</td><td>12.7</td><td>10.6</td><td>86.8</td><td>14.14</td></tr><tr><td>skip 1</td><td>11.4</td><td>10.1</td><td>88.0</td><td>14.19</td></tr><tr><td>2</td><td>11.4</td><td>10.1</td><td>88.1</td><td>14.20</td></tr><tr><td>free</td><td>11.5</td><td>10.0</td><td>88.0</td><td>14.19</td></tr><tr><td>ITG</td><td>11.0</td><td>9.9</td><td>88.2</td><td>14.25</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "Translation performance WER[%] for the SLDB task (330 sentences). Sentence lengths: short: < 10 words, long: \u2265 10 words; times in milliseconds per sentence.",
"num": null,
"content": "<table><tr><td/><td>WER[%]</td><td/></tr><tr><td/><td colspan=\"2\">sentence length</td></tr><tr><td colspan=\"2\">reorder short long</td><td colspan=\"2\">all time[ms]</td></tr><tr><td>mon</td><td colspan=\"2\">32.0 52.6 48.1</td><td>911</td></tr><tr><td>skip 1</td><td colspan=\"2\">31.9 51.1 46.9</td><td>3 175</td></tr><tr><td>2</td><td colspan=\"2\">32.0 51.4 47.2</td><td>4 549</td></tr><tr><td>free</td><td colspan=\"2\">32.0 51.4 47.2</td><td>4 993</td></tr><tr><td>ITG</td><td colspan=\"2\">31.8 50.9 46.7</td><td>4 472</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"5\">: Translation performance for the</td></tr><tr><td colspan=\"3\">SLDB task (330 sentences).</td><td/><td/></tr><tr><td/><td colspan=\"4\">error rates[%] accuracy measures</td></tr><tr><td colspan=\"4\">reorder WER PER BLEU[%]</td><td>NIST</td></tr><tr><td>mon</td><td>48.1</td><td>35.5</td><td>54.4</td><td>9.45</td></tr><tr><td>skip 1</td><td>46.9</td><td>35.0</td><td>56.8</td><td>9.71</td></tr><tr><td>2</td><td>47.2</td><td>35.1</td><td>57.1</td><td>9.74</td></tr><tr><td>free</td><td>47.2</td><td>34.9</td><td>57.1</td><td>9.75</td></tr><tr><td>ITG</td><td>46.7</td><td>34.6</td><td>57.1</td><td>9.76</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}