{
"paper_id": "N04-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:44:25.436438Z"
},
"title": "Improvements in Phrase-Based Statistical Machine Translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In statistical machine translation, the currently best performing systems are based in some way on phrases or word groups. We describe the baseline phrase-based translation system and various refinements. We describe a highly efficient monotone search algorithm with a complexity linear in the input sentence length. We present translation results for three tasks: Verbmobil, Xerox and the Canadian Hansards. For the Xerox task, it takes less than 7 seconds to translate the whole test set consisting of more than 10K words. The translation results for the Xerox and Canadian Hansards task are very promising. The system even outperforms the alignment template system. K k=1 p(f k |\u1ebd k) We use the maximum approximation for the hidden variable S. Therefore, the feature functions are dependent on S. Although the number of phrases K is implicitly given by the segmentation S, we used both S and K to make this dependency more obvious.",
"pdf_parse": {
"paper_id": "N04-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "In statistical machine translation, the currently best performing systems are based in some way on phrases or word groups. We describe the baseline phrase-based translation system and various refinements. We describe a highly efficient monotone search algorithm with a complexity linear in the input sentence length. We present translation results for three tasks: Verbmobil, Xerox and the Canadian Hansards. For the Xerox task, it takes less than 7 seconds to translate the whole test set consisting of more than 10K words. The translation results for the Xerox and Canadian Hansards task are very promising. The system even outperforms the alignment template system. K k=1 p(f k |\u1ebd k) We use the maximum approximation for the hidden variable S. Therefore, the feature functions are dependent on S. Although the number of phrases K is implicitly given by the segmentation S, we used both S and K to make this dependency more obvious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical machine translation, we are given a source language ('French') sentence f J 1 = f 1 . . . f j . . . f J , which is to be translated into a target language ('English') sentence e I 1 = e 1 . . . e i . . . e I . Among all possible target language sentences, we will choose the sentence with the highest probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e I 1 = argmax e I 1 P r(e I 1 |f J 1 )",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "= argmax e I 1 P r(e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2022 P r(f J 1 |e I 1 )",
"eq_num": "(2)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The decomposition into two knowledge sources in Equation 2 is known as the source-channel approach to statistical machine translation (Brown et al., 1990) . It allows an independent modeling of target language model P r(e I 1 ) and translation model P r(f J 1 |e I 1 ) 1 . The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. It can be further decomposed into alignment and lexicon model. The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. We have to maximize over all possible target language sentences. An alternative to the classical source-channel approach is the direct modeling of the posterior probability P r(e I 1 |f J 1 ). Using a log-linear model , we obtain:",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "P r(e I 1 |f J 1 ) = exp M m=1 \u03bb m h m (e I 1 , f J 1 ) \u2022 Z(f J 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, Z(f J 1 ) denotes the appropriate normalization constant. As a decision rule, we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "e I 1 = argmax e I 1 M m=1 \u03bb m h m (e I 1 , f J 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This approach is a generalization of the source-channel approach. It has the advantage that additional models or feature functions can be easily integrated into the overall system. The model scaling factors \u03bb M 1 are trained according to the maximum entropy principle, e.g. using the GIS algorithm. Alternatively, one can train them with respect to the final translation quality measured by some error criterion (Och, 2003) .",
"cite_spans": [
{
"start": 412,
"end": 423,
"text": "(Och, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
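To make the decision rule above concrete, here is a minimal illustrative sketch (not from the paper) of a log-linear argmax over a fixed candidate list; the feature functions and weights are hypothetical placeholders, and the normalization constant Z(f_1^J) is dropped because it does not depend on the candidate.

```python
# Minimal sketch of the log-linear decision rule: pick the candidate e that
# maximizes sum_m lambda_m * h_m(e, f). Z(f) is constant in e and can be dropped.
from typing import Callable, Dict, List, Sequence

FeatureFn = Callable[[Sequence[str], Sequence[str]], float]

def loglinear_argmax(src: Sequence[str],
                     candidates: List[Sequence[str]],
                     features: Dict[str, FeatureFn],
                     lambdas: Dict[str, float]) -> Sequence[str]:
    def score(cand: Sequence[str]) -> float:
        return sum(lambdas[name] * h(cand, src) for name, h in features.items())
    return max(candidates, key=score)

# Toy usage with two hypothetical features: a word penalty (target length)
# and a stand-in translation-model score.
features = {
    "word_penalty": lambda e, f: float(len(e)),
    "tm_score": lambda e, f: 0.0 if len(e) == len(f) else -2.0,
}
lambdas = {"word_penalty": -0.5, "tm_score": 1.0}
print(loglinear_argmax(["das", "Haus"],
                       [["the", "house"], ["the", "big", "house"]],
                       features, lambdas))   # -> ['the', 'house']
```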
{
"text": "The remaining part of this work is structured as follows: in the next section, we will describe the baseline phrase-based translation model and the extraction of bilingual phrases. Then, we will describe refinements of the baseline model. In Section 4, we will describe a monotone search algorithm. Its complexity is linear in the sentence length. The next section contains the statistics of the corpora that were used. Then, we will investigate the degree of monotonicity and present the translation results for three tasks: Verbmobil, Xerox and Canadian Hansards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One major disadvantage of single-word based approaches is that contextual information is not taken into account. The lexicon probabilities are based only on single words. For many words, the translation depends heavily on the surrounding words. In the single-word based translation approach, this disambiguation is addressed by the language model only, which is often not capable of doing this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2.1"
},
{
"text": "One way to incorporate the context into the translation model is to learn translations for whole phrases instead of single words. Here, a phrase is simply a sequence of words. So, the basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2.1"
},
{
"text": "The system somehow has to learn which phrases are translations of each other. Therefore, we use the following approach: first, we train statistical alignment models using GIZA ++ and compute the Viterbi word alignment of the training corpus. This is done for both translation directions. We take the union of both alignments to obtain a symmetrized word alignment matrix. This alignment matrix is the starting point for the phrase extraction. The following criterion defines the set of bilingual phrases BP of the sentence pair (f J 1 ; e I 1 ) and the alignment matrix A \u2286 J \u00d7 I that is used in the translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Extraction",
"sec_num": "2.2"
},
{
"text": "BP(f J 1 , e I 1 , A) = f j 2 j1 , e i 2 i1 : \u2200(j, i) \u2208 A : j 1 \u2264 j \u2264 j 2 \u2194 i 1 \u2264 i \u2264 i 2 \u2227\u2203(j, i) \u2208 A : j 1 \u2264 j \u2264 j 2 \u2227 i 1 \u2264 i \u2264 i 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Extraction",
"sec_num": "2.2"
},
{
"text": "This criterion is identical to the alignment template criterion described in (Och et al., 1999) . It means that two phrases are considered to be translations of each other, if the words are aligned only within the phrase pair and not to words outside. The phrases have to be contiguous.",
"cite_spans": [
{
"start": 77,
"end": 95,
"text": "(Och et al., 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Extraction",
"sec_num": "2.2"
},
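A small sketch of the extraction criterion BP as we read it: a source span and a target span form a bilingual phrase if no alignment link leaves the rectangle and at least one link lies inside it. This is an illustrative implementation, not the authors' code; for brevity it only extracts the tightest target span and omits the usual extension over unaligned boundary words.

```python
# Phrase-pair extraction from a symmetrized word alignment (set of 0-based (j, i) links).
from typing import List, Set, Tuple

def extract_phrases(src: List[str], tgt: List[str],
                    alignment: Set[Tuple[int, int]],
                    max_len: int = 12) -> Set[Tuple[Tuple[str, ...], Tuple[str, ...]]]:
    phrases = set()
    for j1 in range(len(src)):
        for j2 in range(j1, min(j1 + max_len, len(src))):
            # Target positions linked to the source span [j1, j2].
            linked_i = [i for (j, i) in alignment if j1 <= j <= j2]
            if not linked_i:
                continue                      # "exists (j, i) in A" fails
            i1, i2 = min(linked_i), max(linked_i)
            if i2 - i1 + 1 > max_len:
                continue
            # Consistency: no link from inside the target span to a source word outside.
            if any(i1 <= i <= i2 and not (j1 <= j <= j2) for (j, i) in alignment):
                continue
            phrases.add((tuple(src[j1:j2 + 1]), tuple(tgt[i1:i2 + 1])))
    return phrases

# Toy example: "das Haus" / "the house" with a diagonal alignment.
print(extract_phrases(["das", "Haus"], ["the", "house"], {(0, 0), (1, 1)}))
```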
{
"text": "To use phrases in the translation model, we introduce the hidden variable S. This is a segmentation of the sentence pair (f J 1 ; e I 1 ) into K phrases (f K 1 ;\u1ebd K 1 ). We use a one-toone phrase alignment, i.e. one source phrase is translated by exactly one target phrase. Thus, we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "P r(f J 1 |e I 1 ) = S P r(f J 1 , S|e I 1 ) (3) = S P r(S|e I 1 ) \u2022 P r(f J 1 |S, e I 1 ) (4) \u2248 max S P r(S|e I 1 ) \u2022 P r(f K 1 |\u1ebd K 1 ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "In the preceding step, we used the maximum approximation for the sum over all segmentations. Next, we allow only translations that are monotone at the phrase level. So, the phrasef 1 is produced by\u1ebd 1 , the phrasef 2 is produced by\u1ebd 2 , and so on. Within the phrases, the reordering is learned during training. Therefore, there is no constraint on the reordering within the phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P r(f K 1 |\u1ebd K 1 ) = K k=1 P r(f k |f k\u22121 1 ,\u1ebd K 1 ) (6) = K k=1 p(f k |\u1ebd k )",
"eq_num": "(7)"
}
],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "Here, we have assumed a zero-order model at the phrase level. Finally, we have to estimate the phrase translation probabilities p(f |\u1ebd). This is done via relative frequencies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(f |\u1ebd) = N (f ,\u1ebd) f N (f ,\u1ebd)",
"eq_num": "(8)"
}
],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "Here, N (f ,\u1ebd) denotes the count of the event thatf has been seen as a translation of\u1ebd. If one occurrence of\u1ebd has N > 1 possible translations, each of them contributes to N (f ,\u1ebd) with 1/N . These counts are calculated from the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
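A minimal sketch of the relative-frequency estimate with the fractional 1/N counts described above; the grouping of extracted pairs per sentence is our own simplification for illustration.

```python
# p(f~|e~) = N(f~, e~) / sum_f' N(f', e~), where an occurrence of e~ with N > 1
# extracted source phrases contributes 1/N to each count.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

Phrase = Tuple[str, ...]

def phrase_probs(extracted: Iterable[List[Tuple[Phrase, Phrase]]]
                 ) -> Dict[Tuple[Phrase, Phrase], float]:
    """`extracted` yields, for each training sentence pair, its list of (f~, e~) pairs."""
    counts: Dict[Tuple[Phrase, Phrase], float] = defaultdict(float)
    totals: Dict[Phrase, float] = defaultdict(float)
    for pairs in extracted:
        by_target: Dict[Phrase, List[Phrase]] = defaultdict(list)
        for f, e in pairs:                    # group this sentence's pairs by e~
            by_target[e].append(f)
        for e, fs in by_target.items():
            w = 1.0 / len(fs)                 # each of the N candidates gets 1/N
            for f in fs:
                counts[(f, e)] += w
                totals[e] += w
    return {(f, e): c / totals[e] for (f, e), c in counts.items()}
```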
{
"text": "Using a bigram language model and assuming Bayes decision rule, Equation (2), we obtain the following search criterion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e I 1 = argmax e I 1 P r(e I 1 ) \u2022 P r(f J 1 |e I 1 )",
"eq_num": "(9)"
}
],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "= argmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "e I 1 I i=1 p(e i |e i\u22121 ) (10) \u2022 max S p(S|e I 1 ) \u2022 K k=1 p(f k |\u1ebd k ) \u2248 argmax e I 1 ,S I i=1 p(e i |e i\u22121 ) K k=1 p(f k |\u1ebd k ) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "For the preceding equation, we assumed the segmentation probability p(S|e I 1 ) to be constant. The result is a simple translation model. If we interpret this model as a feature function in the direct approach, we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "h phr (f J 1 , e I 1 , S, K) = log",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "2.3"
},
{
"text": "In this section, we will describe refinements of the phrase-based translation model. First, we will describe two heuristics: word penalty and phrase penalty. Second, we will describe a single-word based lexicon model. This will be used to smooth the phrase translation probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refinements",
"sec_num": "3"
},
{
"text": "In addition to the baseline model, we use two simple heuristics, namely word penalty and phrase penalty:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Heuristics",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h wp (f J 1 , e I 1 , S, K) = I (12) h pp (f J 1 , e I 1 , S, K) = K",
"eq_num": "(13)"
}
],
"section": "Simple Heuristics",
"sec_num": "3.1"
},
{
"text": "The word penalty feature is simply the target sentence length. In combination with the scaling factor this results in a constant cost per produced target language word. With this feature, we are able to adjust the sentence length. If we set a negative scaling factor, longer sentences are more penalized than shorter ones, and the system will favor shorter translations. Alternatively, by using a positive scaling factors, the system will favor longer translations. Similar to the word penalty, the phrase penalty feature results in a constant cost per produced phrase. The phrase penalty is used to adjust the average length of the phrases. A negative weight, meaning real costs per phrase, results in a preference for longer phrases. A positive weight, meaning a bonus per phrase, results in a preference for shorter phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Heuristics",
"sec_num": "3.1"
},
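As a tiny illustration (ours, not the paper's code) of how the sign of the scaling factor steers the length preference, the two features are literally the target length I and the phrase count K:

```python
# Word penalty h_wp = I and phrase penalty h_pp = K as plain feature functions.
def h_wp(target_words, segmentation):
    return float(len(target_words))          # I, the target sentence length

def h_pp(target_words, segmentation):
    return float(len(segmentation))          # K, the number of phrases

# A negative word-penalty weight charges every produced word, so the shorter
# candidate wins on this feature; a positive weight would reverse the preference.
shorter, longer = ["yes"], ["yes", "indeed", "certainly"]
lam_wp = -0.5
print(lam_wp * h_wp(shorter, []), lam_wp * h_wp(longer, []))   # -0.5 vs. -1.5
```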
{
"text": "We are using relative frequencies to estimate the phrase translation probabilities. Most of the longer phrases are seen only once in the training corpus. Therefore, pure relative frequencies overestimate the probability of those phrases. To overcome this problem, we use a word-based lexicon model to smooth the phrase translation probabilities. For a source word f and a target phrase\u1ebd = e i 2 i1 , we use the following approximation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "p(f |e i 2 i1 ) \u2248 1 \u2212 i 2 i=i 1 (1 \u2212 p(f |e i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "This models a disjunctive interaction, also called noisy-OR gate (Pearl, 1988) . The idea is that there are multiple independent causes e i2 i 1 that can generate an event f . It can be easily integrated into the search algorithm. The corresponding feature function is:",
"cite_spans": [
{
"start": 65,
"end": 78,
"text": "(Pearl, 1988)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "h lex (f J 1 , e I 1 , S, K) = log K k=1 j k j=j k\u22121 +1 p(f j |\u1ebd k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "Here, j k and i k denote the final position of phrase number k in the source and the target sentence, respectively, and we define j 0 := 0 and i 0 := 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
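A small sketch of the noisy-OR smoothing and the resulting lexicon feature h_lex, using the source spans of the segmentation; the probability floor is our own safeguard against log(0) and is not part of the model.

```python
# Noisy-OR lexicon score p(f | e_{i1}..e_{i2}) ~ 1 - prod_i (1 - p(f | e_i)) and
# the feature h_lex, which sums the log score over all source words of each phrase.
import math
from typing import Dict, List, Sequence, Tuple

def noisy_or(f: str, e_phrase: Sequence[str],
             p_lex: Dict[Tuple[str, str], float]) -> float:
    prod = 1.0
    for e in e_phrase:
        prod *= 1.0 - p_lex.get((f, e), 0.0)
    return 1.0 - prod

def h_lex(src: Sequence[str],
          segmentation: List[Tuple[Tuple[int, int], Sequence[str]]],
          p_lex: Dict[Tuple[str, str], float],
          floor: float = 1e-10) -> float:
    """segmentation: ((j_start, j_end), target_phrase) with 0-based, inclusive
    source spans that cover the source sentence monotonically."""
    score = 0.0
    for (j_start, j_end), e_phrase in segmentation:
        for j in range(j_start, j_end + 1):
            score += math.log(max(noisy_or(src[j], e_phrase, p_lex), floor))
    return score
```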
{
"text": "To estimate the single-word based translation probabilities p(f |e), we use smoothed relative frequencies. The smoothing method we apply is absolute discounting with interpolation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "p(f |e) = max {N (f, e) \u2212 d, 0} N (e) + \u03b1(e) \u2022 \u03b2(f )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "This method is well known from language modeling (Ney et al., 1997) . Here, d is the nonnegative discounting parameter, \u03b1(e) is a normalization constant and \u03b2 is the normalized backing-off distribution. To compute the counts, we use the same word alignment matrix as for the extraction of the bilingual phrases. The symbol N (e) denotes the unigram count of a word e and N (f, e) denotes the count of the event that the target language word e is aligned to the source language word f . If one occurrence of e has N > 1 aligned source words, each of them contributes with a count of 1/N . The formula for \u03b1(e) is:",
"cite_spans": [
{
"start": 49,
"end": 67,
"text": "(Ney et al., 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "\u03b1(e) = 1 N (e) \uf8eb \uf8ed f :N (f,e)>d d + f :N (f,e)\u2264d N (f, e) \uf8f6 \uf8f8 = 1 N (e) f min{d, N (f, e)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "This formula is a generalization of the one typically used in publications on language modeling. This generalization is necessary, because the lexicon counts may be fractional whereas in language modeling typically integer counts are used. Additionally, we want to allow discounting values d greater than one. One effect of the discounting parameter d is that all lexicon entries with a count less than d are discarded and the freed probability mass is redistributed among the other entries. As backing-off distribution \u03b2(f ), we consider two alternatives. The first one is a uniform distribution and the second one is the unigram distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 1 (f ) = 1 V f (14) \u03b2 2 (f ) = N (f ) f N (f )",
"eq_num": "(15)"
}
],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
{
"text": "Here, V f denotes the vocabulary size of the source language and N (f ) denotes the unigram count of a source word f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based Lexicon",
"sec_num": "3.2"
},
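A compact sketch of the smoothed lexicon as defined above: absolute discounting with interpolation, with either the uniform or the unigram back-off distribution. It is an illustrative implementation that enumerates the full f x e grid, which is fine for toy data but not for a real lexicon.

```python
# p(f|e) = max{N(f,e) - d, 0} / N(e) + alpha(e) * beta(f), with
# alpha(e) = (1/N(e)) * sum_f min{d, N(f,e)}.
from collections import defaultdict
from typing import Dict, Tuple

def smoothed_lexicon(counts: Dict[Tuple[str, str], float],   # N(f, e), possibly fractional
                     d: float = 0.5,
                     uniform_backoff: bool = True) -> Dict[Tuple[str, str], float]:
    n_e: Dict[str, float] = defaultdict(float)    # N(e)
    n_f: Dict[str, float] = defaultdict(float)    # N(f), for the unigram back-off
    for (f, e), c in counts.items():
        n_e[e] += c
        n_f[f] += c
    vocab_f = sorted(n_f)
    total_f = sum(n_f.values())
    beta = {f: (1.0 / len(vocab_f)) if uniform_backoff else n_f[f] / total_f
            for f in vocab_f}
    # alpha(e) collects the probability mass freed by the discounting.
    alpha = {e: sum(min(d, counts.get((f, e), 0.0)) for f in vocab_f) / n_e[e]
             for e in n_e}
    return {(f, e): max(counts.get((f, e), 0.0) - d, 0.0) / n_e[e] + alpha[e] * beta[f]
            for e in n_e for f in vocab_f}
```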
{
"text": "The monotone search can be efficiently computed with dynamic programming. The resulting complexity is linear in the sentence length. We present the formulae for a bigram language model. This is only for notational convenience. The generalization to a higher order language model is straightforward. For the maximization problem in (11), we define the quantity Q(j, e) as the maximum probability of a phrase sequence that ends with the language word e and covers positions 1 to j of the source sentence. Q(J + 1, $) is the probability of the optimum translation. The $ symbol is the sentence boundary marker. We obtain the following dynamic programming recursion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
{
"text": "Q(0, $) = 1 Q(j, e) = max e ,\u1ebd, j\u2212M \u2264j <j p(f j j +1 |\u1ebd) \u2022 p(\u1ebd|e ) \u2022 Q(j , e ) Q(J + 1, $) = max e {Q(J, e ) \u2022 p($|e )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
{
"text": "Here, M denotes the maximum phrase length in the source language. During the search, we store backpointers to the maximizing arguments. After performing the search, we can generate the optimum translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
{
"text": "The resulting algorithm has a worst-case complexity of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
{
"text": "O(J \u2022 M \u2022 V e \u2022 E).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
{
"text": "Here, V e denotes the vocabulary size of the target language and E denotes the maximum number of phrase translation candidates for a source language phrase. Using efficient data structures and taking into account that not all possible target language phrases can occur in translating a specific source language sentence, we can perform a very efficient search. This monotone algorithm is especially useful for language pairs that have a similar word order, e.g. Spanish-English or French-English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotone Search",
"sec_num": "4"
},
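A self-contained sketch of the monotone dynamic-programming search from this section. It follows the recursion for Q(j, e) with a bigram language model; the phrase table, probabilities, and the log-space scoring are illustrative choices, not the authors' implementation.

```python
import math
from typing import Dict, List, Tuple

def monotone_search(src: List[str],
                    phrase_table: Dict[Tuple[str, ...], List[Tuple[Tuple[str, ...], float]]],
                    bigram: Dict[Tuple[str, str], float],
                    max_phrase_len: int = 12) -> List[str]:
    NEG_INF = float("-inf")
    # Q[j] maps the last produced target word e to (best log score, backpointer).
    Q = [dict() for _ in range(len(src) + 1)]
    Q[0]["$"] = (0.0, None)
    lm = lambda w, hist: math.log(bigram.get((hist, w), 1e-10))
    for j2 in range(1, len(src) + 1):
        for j1 in range(max(0, j2 - max_phrase_len), j2):
            f_phrase = tuple(src[j1:j2])
            for e_phrase, log_ptrans in phrase_table.get(f_phrase, []):
                for prev_e, (prev_score, _) in Q[j1].items():
                    score, hist = prev_score + log_ptrans, prev_e
                    for w in e_phrase:                 # LM score across the phrase
                        score += lm(w, hist)
                        hist = w
                    if score > Q[j2].get(hist, (NEG_INF, None))[0]:
                        Q[j2][hist] = (score, (j1, prev_e, e_phrase))
    # Sentence-end transition, then backtracking along the stored backpointers
    # (assumes the phrase table can cover the sentence at all).
    final = Q[len(src)]
    best_e = max(final, key=lambda e: final[e][0] + lm("$", e))
    out, j, e = [], len(src), best_e
    while j > 0:
        _, (j_prev, prev_e, e_phrase) = Q[j][e]
        out = list(e_phrase) + out
        j, e = j_prev, prev_e
    return out

# Toy run on a two-word sentence.
table = {("das",): [(("the",), math.log(0.9))],
         ("Haus",): [(("house",), math.log(0.8))],
         ("das", "Haus"): [(("the", "house"), math.log(0.7))]}
lm_probs = {("$", "the"): 0.4, ("the", "house"): 0.5, ("house", "$"): 0.6}
print(monotone_search(["das", "Haus"], table, lm_probs))   # -> ['the', 'house']
```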
{
"text": "In the following sections, we will present results on three tasks: Verbmobil, Xerox and Canadian Hansards. Therefore, we will show the corpus statistics for each of these tasks in this section. The training corpus (Train) of each task is used to train a word alignment and then extract the bilingual phrases and the word-based lexicon. The remaining free parameters, e.g. the model scaling factors, are optimized on the development corpus (Dev). The resulting system is then evaluated on the test corpus (Test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "5"
},
{
"text": "Verbmobil Task. The first task we will present results on is the German-English Verbmobil task (Wahlster, 2000) . The domain of this corpus is appointment scheduling, travel planning, and hotel reservation. It consists of transcriptions of spontaneous speech. Table 1 shows the corpus statistics of this task.",
"cite_spans": [
{
"start": 95,
"end": 111,
"text": "(Wahlster, 2000)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "5"
},
{
"text": "Xerox task. Additionally, we carried out experiments on the Spanish-English Xerox task. The corpus consists of technical manuals. This is a rather limited domain task. Table 2 shows the training, development and test corpus statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "5"
},
{
"text": "Canadian Hansards task. Further experiments were carried out on the French-English Canadian Hansards task. This task contains the proceedings of the Canadian parliament. About 3 million parallel sentences of this bilingual data have been made available by the Linguistic Data Consortium (LDC). Here, we use a subset of the data containing only sentences with a maximum length of 30 words. This task covers a large variety of topics, so this is an open-domain corpus. This is also reflected by the large vocabulary size. Table 3 shows the training and test corpus statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "5"
},
{
"text": "In this section, we will investigate the effect of the monotonicity constraint. Therefore, we compute how many of the training corpus sentence pairs can be produced with the monotone phrase-based search. We compare this to the number of sentence pairs that can be produced with a nonmonotone phrase-based search. To make these numbers more realistic, we use leaving-one-out. Thus phrases that are extracted from a specific sentence pair are not used to check its monotonicity. With leaving-one-out it is possible that even the nonmonotone search cannot generate a sentence pair. This happens if a sentence pair contains a word that occurs only once in the training corpus. All phrases that might produce this singleton are excluded because of the leaving-one-out principle. Note that all these monotonicity consideration are done at the phrase level. Within the phrases arbitrary reorderings are allowed. The only restriction is that they occur in the training corpus. Table 4 shows the percentage of the training corpus that can be generated with monotone and nonmonotone phrase-based search. The number of sentence pairs that can be produced with the nonmonotone search gives an estimate of the upper bound for the sentence error rate of the phrase-based system that is trained on the given data. The same considerations hold for the monotone search. The maximum source phrase length for the Verbmobil task and the Xerox task is 12, whereas for the Canadian Hansards task we use a maximum of 4, because of the large corpus size. This explains the rather low coverage on the Canadian Hansards task for both the nonmonotone and the monotone search.",
"cite_spans": [],
"ref_spans": [
{
"start": 969,
"end": 976,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Degree of Monotonicity",
"sec_num": "6"
},
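The producibility check can be sketched as a simple dynamic program over source and target prefixes; this is our illustrative reading of the procedure, with the leaving-one-out handled by whatever phrase set the caller passes in.

```python
# Can (src, tgt) be segmented into phrase pairs from `phrases`, in the same
# order on both sides (i.e. produced by the monotone phrase-based search)?
from functools import lru_cache
from typing import FrozenSet, List, Tuple

def monotone_producible(src: List[str], tgt: List[str],
                        phrases: FrozenSet[Tuple[Tuple[str, ...], Tuple[str, ...]]],
                        max_src_len: int = 12) -> bool:
    @lru_cache(maxsize=None)
    def reachable(j: int, i: int) -> bool:
        if j == len(src) and i == len(tgt):
            return True
        for j2 in range(j + 1, min(j + max_src_len, len(src)) + 1):
            for i2 in range(i + 1, len(tgt) + 1):
                if (tuple(src[j:j2]), tuple(tgt[i:i2])) in phrases and reachable(j2, i2):
                    return True
        return False
    return reachable(0, 0)

# For the leaving-one-out estimate above, the phrase set passed in would simply
# exclude all phrases extracted from this particular sentence pair.
```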
{
"text": "For the Xerox task, the nonmonotone search can produce 75.1% of the sentence pairs whereas the monotone can produce 65.3%. The ratio of the two numbers measures how much the system deteriorates by using the monotone search and will be called the degree of monotonicity. For the Xerox task, the degree of monotonicity is 87.0%. This means the monotone search can produce 87.0% of the sentence pairs that can be produced with the nonmonotone search. We see that for the Spanish-English Xerox task and for the French-English Canadian Hansards task, the degree of monotonicity is rather high. For the German-English Verbmobil task it is significantly lower. This may be caused by the rather free word order in German and the long range reorderings that are necessary to translate the verb group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of Monotonicity",
"sec_num": "6"
},
{
"text": "It should be pointed out that in practice the monotone search will perform better than what the preceding estimates indicate. The reason is that we assumed a perfect nonmonotone search, which is difficult to achieve in practice. This is not only a hard search problem, but also a complicated modeling problem. We will see in the next section that the monotone search will perform very well on both the Xerox task and the Canadian Hansards task. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of Monotonicity",
"sec_num": "6"
},
{
"text": "So far, in machine translation research a single generally accepted criterion for the evaluation of the experimental results does not exist. Therefore, we use a variety of different criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "\u2022 WER (word error rate):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the reference sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "\u2022 PER (position-independent word error rate): A shortcoming of the WER is that it requires a perfect word order. The word order of an acceptable sentence can be different from that of the target sentence, so that the WER measure alone could be misleading. The PER compares the words in the two sentences ignoring the word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
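For concreteness, a small sketch of the two error rates; the PER variant below (missing reference words plus surplus hypothesis length) is one common formulation and may differ in detail from the evaluation tool actually used.

```python
# WER: word-level Levenshtein distance to the reference. PER: same comparison
# ignoring word order.
from collections import Counter
from typing import List

def wer(hyp: List[str], ref: List[str]) -> float:
    d = list(range(len(ref) + 1))             # one-row edit-distance DP
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
            prev, d[j] = d[j], cur
    return d[len(ref)] / len(ref)

def per(hyp: List[str], ref: List[str]) -> float:
    missing = sum((Counter(ref) - Counter(hyp)).values())
    surplus = max(0, len(hyp) - len(ref))
    return (missing + surplus) / len(ref)

print(wer("the house is small".split(), "the small house".split()))   # 1.0
print(per("the house is small".split(), "the small house".split()))   # ~0.33
```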
{
"text": "\u2022 BLEU score: This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a reference translation with a penalty for too short sentences (Papineni et al., 2001 ). BLEU measures accuracy, i.e. large BLEU scores are better.",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "\u2022 NIST score: This score is similar to BLEU. It is a weighted ngram precision in combination with a penalty for too short sentences (Doddington, 2002) . NIST measures accuracy, i.e. large NIST scores are better.",
"cite_spans": [
{
"start": 132,
"end": 150,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "For the Verbmobil task, we have multiple references available. Therefore on this task, we compute all the preceding criteria with respect to multiple references. To indicate this, we will precede the acronyms with an m (multiple) if multiple references are used. For the other two tasks, only single references are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "7.1"
},
{
"text": "In this section, we will describe the systems that were used. On the one hand, we have three different variants of the single-word based model IBM4. On the other hand, we have two phrase-based systems, namely the alignment templates and the one described in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Systems",
"sec_num": "7.2"
},
{
"text": "First, there is a monotone search variant (Mon) that translates each word of the source sentence from left to right. The second variant allows reordering according to the so-called IBM constraints (Berger et al., 1996) . Thus up to three words may be skipped and translated later. This system will be denoted by IBM. The third variant implements special German-English reordering constraints. These constraints are represented by a finite state automaton and optimized to handle the reorderings of the German verb group. The abbreviation for this variant is GE. It is only used for the German-English Verbmobil task. This is just an extremely brief description of these systems. For details, see (Tillmann and Ney, 2003) .",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 696,
"end": 720,
"text": "(Tillmann and Ney, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single-Word Based Systems (SWB).",
"sec_num": null
},
{
"text": "Phrase-Based System (PB). For the phrase-based system, we use the following feature functions: a trigram language model, the phrase translation model and the word-based lexicon model. The latter two feature functions are used for both directions: p(f |e) and p(e|f ). Additionally, we use the word and phrase penalty feature functions. The model scaling factors are optimized on the development corpus with respect to mWER similar to (Och, 2003) . We use the Downhill Simplex algorithm from (Press et al., 2002) . We do not perform the optimization on N -best lists but we retranslate the whole development corpus for each iteration of the optimization algorithm. This is feasible because this system is extremely fast. It takes only a few seconds to translate the whole development corpus for the Verbmobil task and the Xerox task; for details see Section 8. In the experiments, the Downhill Simplex algorithm converged after about 200 iterations. This method has the advantage that it is not limited to the model scaling factors as the method described in (Och, 2003) . It is also possible to optimize any other parameter, e.g. the discounting parameter for the lexicon smoothing.",
"cite_spans": [
{
"start": 434,
"end": 445,
"text": "(Och, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 491,
"end": 511,
"text": "(Press et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 1058,
"end": 1069,
"text": "(Och, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single-Word Based Systems (SWB).",
"sec_num": null
},
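A sketch of the tuning loop described in this paragraph: the scaling factors are optimized with the Downhill Simplex (Nelder-Mead) method, and the whole development corpus is retranslated for every evaluation of the objective. `translate_dev` and `mwer` are hypothetical stand-ins for the decoder and the multi-reference WER computation; the stopping options are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def tune_scaling_factors(initial_lambdas, dev_source, dev_references,
                         translate_dev, mwer):
    def objective(lambdas):
        # Retranslate the whole development corpus with the current factors;
        # feasible here because the monotone decoder is very fast.
        hypotheses = translate_dev(dev_source, lambdas)
        return mwer(hypotheses, dev_references)       # minimize the error rate

    result = minimize(objective, np.asarray(initial_lambdas, dtype=float),
                      method="Nelder-Mead",
                      options={"maxiter": 200, "xatol": 1e-3, "fatol": 1e-3})
    return result.x
```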
{
"text": "Alignment Template System (AT). The alignment template system (Och et al., 1999) is similar to the system described in this work. One difference is that the alignment templates are not defined at the word level but at a word class level. In addition to the word-based trigram model, the alignment template system uses a classbased fivegram language model. The search algorithm of the alignment templates allows arbitrary reorderings of the templates. It penalizes reorderings with costs that are linear in the jump width. To make the results as comparable as possible, the alignment template system and the phrase-based system start from the same word alignment. The alignment template system uses discriminative training of the model scaling factors as described in .",
"cite_spans": [
{
"start": 62,
"end": 80,
"text": "(Och et al., 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single-Word Based Systems (SWB).",
"sec_num": null
},
{
"text": "We start with the Verbmobil results. We studied smoothing the lexicon probabilities as described in Section 3.2. The results are summarized in Table 5 . We see that the There is a degradation of the mWER of 0.9%. In the following, all phrase-based systems use the uniform smoothing method. The translation results of the different systems are shown in Table 6 . Obviously, the monotone phrase-based system outperforms the monotone single-word based system. The result of the phrase-based system is comparable to the nonmonotone single-word based search with the IBM constraints. With respect to the mPER, the PB system clearly outperforms all single-word based systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 352,
"end": 359,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Verbmobil Task",
"sec_num": "7.3"
},
{
"text": "If we compare the monotone phrase-based system with the nonmonotone alignment template system, we see that the mPERs are similar. Thus the lexical choice of words is of the same quality. Regarding the other evaluation criteria, which take the word order into account, the nonmonotone search of the alignment templates has a clear advantage. This was already indicated by the low degree of monotonicity on this task. The rather free word order in German and the long range dependencies of the verb group make reorderings necessary. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbmobil Task",
"sec_num": "7.3"
},
{
"text": "The translation results for the Xerox task are shown in Table 7 . Here, we see that both phrase-based systems clearly outperform the single-word based systems. The PB system performs best on this task. Compared to the AT system, the BLEU score improves by 4.1% absolute. The improvement of the PB system with respect to the AT system is statistically significant at the 99% level. ",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Xerox task",
"sec_num": "7.4"
},
{
"text": "The translation results for the Canadian Hansards task are shown in Table 8 . As on the Xerox task, the phrase-based systems perform better than the single-word based systems. The monotone phrase-based system yields even better results than the alignment template system. This improvement is consistent among all evaluation criteria and it is statistically significant at the 99% level. ",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Canadian Hansards task",
"sec_num": "7.5"
},
{
"text": "In this section, we analyze the translation speed of the phrase-based translation system. All experiments were carried out on an AMD Athlon with 2.2GHz. Note that the systems were not optimized for speed. We used the best performing systems to measure the translation times. The translation speed of the monotone phrase-based system for all three tasks is shown in Table 9. For the Xerox task, the translation process takes less than 7 seconds for the whole 10K words test set. For the Verbmobil task, the system is even slightly faster. It takes about 1.6 seconds to translate the whole test set. For the Canadian Hansards task, the translation process is much slower, but the average time per sentence is still less than 1 second. We think that this slowdown can be attributed to the large training corpus. The system loads only phrase pairs into memory if the source phrase occurs in the test corpus. Therefore, the large test corpus size for this task also affects the translation speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "8"
},
{
"text": "In Fig. 1 , we see the average translation time per sentence as a function of the sentence length. The translation times were measured for the translation of the 5432 test sentences of the Canadian Hansards task. We see a clear linear dependency. Even for sentences of thirty words, the translation takes only about 1.5 seconds. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "8"
},
{
"text": "Recently, phrase-based translation approaches became more and more popular. Some examples are the alignment template system in (Och et al., 1999; that we used for comparison. In (Zens et al., 2002) , a simple phrase-based approach is described that served as starting point for the system in this work. (Marcu and Wong, 2002) presents a joint probability model for phrase-based translation. It does not use the word alignment for extracting the phrases, but directly generates a phrase alignment. In (Koehn et al., 2003) , various aspects of phrase-based systems are compared, e.g. the phrase extraction method, the underlying word alignment model, or the maximum phrase length. (Tomas and Casacuberta, 2003) describes a linear interpolation of a phrase-based and an alignment template-based approach.",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "(Och et al., 1999;",
"ref_id": "BIBREF7"
},
{
"start": 178,
"end": 197,
"text": "(Zens et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 303,
"end": 325,
"text": "(Marcu and Wong, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 500,
"end": 520,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF3"
},
{
"start": 679,
"end": 708,
"text": "(Tomas and Casacuberta, 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "9"
},
{
"text": "We described a phrase-based translation approach. The basic idea of this approach is to remember all bilingual phrases that have been seen in the word-aligned training corpus. As refinements of the baseline model, we described two simple heuristics: the word penalty feature and the phrase penalty feature. Additionally, we presented a single-word based lexicon with two smoothing methods. The model scaling factors were optimized with respect to the mWER on the development corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "We described a highly efficient monotone search algorithm. The worst-case complexity of this algorithm is linear in the sentence length. This leads to an impressive translation speed of more than 1000 words per second for the Verbmobil task and for the Xerox task. Even for the Canadian Hansards task the translation of sentences of length 30 takes only about 1.5 seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "The described search is monotone at the phrase level. Within the phrases, there are no constraints on the reorderings. Therefore, this method is best suited for language pairs that have a similar order at the level of the phrases learned by the system. Thus, the translation process should require only local reorderings. As the experiments have shown, Spanish-English and French-English are examples of such language pairs. For these pairs, the monotone search was found to be sufficient. The phrase-based approach clearly outperformed the singleword based systems. It showed even better performance than the alignment template system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "The experiments on the German-English Verbmobil task outlined the limitations of the monotone search. As the low degree of monotonicity indicated, reordering plays an important role on this task. The rather free word order in German as well as the verb group seems to be difficult to translate. Nevertheless, when ignoring the word order and looking at the mPER only, the monotone search is competitive with the best performing system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "For further improvements, we will investigate the usefulness of additional models, e.g. modeling the segmentation probability. Also, slightly relaxing the monotonicity constraint in a way that still allows an efficient search is of high interest. In spirit of the IBM reordering constraints of the single-word based models, we could allow a phrase to be skipped and to be translated later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "The notational convention will be as follows: we use the symbol P r(\u2022) to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol p(\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially funded by the EU project TransType 2, IST-2001-32091. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language translation apparatus and method of using context-based translation models, United States patent",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Gillett",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Kehler",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, J. R. Gillett, A. S. Kehler, and R. L. Mercer. 1996. Language translation apparatus and method of using context-based translation models, United States patent, patent number 5510981, April.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics. In Proc. ARPA Workshop on Human Language Technology.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Human Language Technology Conf. (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of the Human Lan- guage Technology Conf. (HLT-NAACL), pages 127- 133, Edmonton, Canada, May/June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Conf. on Empirical Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "133--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. Conf. on Empirical Methods for Natural Lan- guage Processing, pages 133-139, Philadelphia, PA, July.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical language modeling using leaving-one-out",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wessel",
"suffix": ""
}
],
"year": 1997,
"venue": "Corpus-Based Methods in Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "174--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Ney, S. Martin, and F. Wessel. 1997. Statistical lan- guage modeling using leaving-one-out. In S. Young and G. Bloothooft, editors, Corpus-Based Methods in Language and Speech Processing, pages 174-207. Kluwer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine trans- lation. In Proc. of the 40th Annual Meeting of the As- sociation for Computational Linguistics (ACL), pages 295-302, Philadelphia, PA, July.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improved alignment models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Meth- ods in Natural Language Processing and Very Large Corpora, pages 20-28, University of Maryland, Col- lege Park, MD, June.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och. 2003. Minimum error rate training in statis- tical machine translation. In Proc. of the 41th Annual Meeting of the Association for Computational Linguis- tics (ACL), pages 160-167, Sapporo, Japan, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "IBM Research Division",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. A. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, September.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Mateo, CA. Revised second printing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Numerical Recipes in C++",
"authors": [
{
"first": "W",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2002. Numerical Recipes in C++. Cam- bridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "97--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillmann and H. Ney. 2003. Word reordering and a dynamic programming beam search algorithm for sta- tistical machine translation. Computational Linguis- tics, 29(1):97-133, March.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining phrasebased and template-based aligned models in statistical translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tomas",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the First Iberian Conf. on Pattern Recognition and Image Analysis",
"volume": "",
"issue": "",
"pages": "1020--1031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Tomas and F. Casacuberta. 2003. Combining phrase- based and template-based aligned models in statisti- cal translation. In Proc. of the First Iberian Conf. on Pattern Recognition and Image Analysis, pages 1020- 1031, Mallorca, Spain, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Verbmobil: Foundations of speech-to-speech translations",
"authors": [],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wahlster, editor. 2000. Verbmobil: Foundations of speech-to-speech translations. Springer Verlag, Berlin, Germany, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "25th German Conference on Artificial Intelligence (KI2002)",
"volume": "",
"issue": "",
"pages": "18--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens, F. J. Och, and H. Ney. 2002. Phrase-based sta- tistical machine translation. In 25th German Confer- ence on Artificial Intelligence (KI2002), pages 18-32, Aachen, Germany, September. Springer Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Average translation time per sentence as a function of the sentence length for the Canadian Hansards task (5432 test sentences).",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">German English</td></tr><tr><td colspan=\"2\">Train Sentences</td><td>58 073</td></tr><tr><td/><td>Words</td><td colspan=\"2\">519 523 549 921</td></tr><tr><td/><td>Vocabulary</td><td>7 939</td><td>4 672</td></tr><tr><td>Dev</td><td>Sentences</td><td>276</td></tr><tr><td/><td>Words</td><td>3 159</td><td>3 438</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>28.1</td></tr><tr><td>Test</td><td>Sentences</td><td>251</td></tr><tr><td/><td>Words</td><td>2 628</td><td>2 871</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>30.5</td></tr></table>",
"type_str": "table",
"text": "Statistics of training and test corpus for the Verbmobil task (PP=perplexity).",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Spanish English</td></tr><tr><td colspan=\"2\">Train Sentences</td><td colspan=\"2\">55 761</td></tr><tr><td/><td>Words</td><td colspan=\"2\">752 606 665 399</td></tr><tr><td/><td>Vocabulary</td><td>11 050</td><td>7 956</td></tr><tr><td>Dev</td><td>Sentences</td><td>1012</td></tr><tr><td/><td>Words</td><td>15 957</td><td>14 278</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>28.1</td></tr><tr><td>Test</td><td>Sentences</td><td>1125</td></tr><tr><td/><td>Words</td><td>10 106</td><td>8 370</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>48.3</td></tr></table>",
"type_str": "table",
"text": "Statistics of training and test corpus for the Xerox task (PP=perplexity).",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">French English</td></tr><tr><td colspan=\"2\">Train Sentences</td><td colspan=\"2\">1.5M</td></tr><tr><td/><td>Words</td><td>24M</td><td>22M</td></tr><tr><td/><td colspan=\"2\">Vocabulary 100 269</td><td>78 332</td></tr><tr><td>Dev</td><td>Sentences</td><td>500</td></tr><tr><td/><td>Words</td><td>9 043</td><td>8 195</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>57.7</td></tr><tr><td>Test</td><td>Sentences</td><td>5432</td></tr><tr><td/><td>Words</td><td>97 646</td><td>88 773</td></tr><tr><td/><td>Trigram PP</td><td>-</td><td>56.7</td></tr></table>",
"type_str": "table",
"text": "Statistics of training and test corpus for the Canadian Hansards task (PP=perplexity).",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Verbmobil Xerox Hansards</td></tr><tr><td>nonmonotone</td><td>76.3</td><td>75.1</td><td>59.7</td></tr><tr><td>monotone</td><td>55.4</td><td>65.3</td><td>51.5</td></tr><tr><td>deg. of mon.</td><td>72.6</td><td>87.0</td><td>86.3</td></tr><tr><td colspan=\"2\">7 Translation Results</td><td/><td/></tr></table>",
"type_str": "table",
"text": "Degree of monotonicity in the training corpora for all three tasks (numbers in percent).",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>system</td><td colspan=\"4\">mWER mPER BLEU NIST</td></tr><tr><td>unsmoothed</td><td>37.3</td><td>21.1</td><td>46.6</td><td>7.96</td></tr><tr><td>uniform</td><td>37.0</td><td>20.7</td><td>47.0</td><td>7.99</td></tr><tr><td>unigram</td><td>38.2</td><td>22.3</td><td>45.5</td><td>7.79</td></tr><tr><td colspan=\"5\">uniform smoothing method improves translation quality.</td></tr><tr><td colspan=\"5\">There is only a minor improvement, but it is consistent</td></tr><tr><td colspan=\"5\">among all evaluation criteria. It is statistically signifi-</td></tr><tr><td colspan=\"5\">cant at the 94% level. The unigram method hurts perfor-</td></tr><tr><td>mance.</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "Effect of lexicon smoothing on the translation performance [%] for the German-English Verbmobil task.",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td colspan=\"6\">system variant mWER mPER BLEU NIST</td></tr><tr><td>SWB</td><td>Mon</td><td>42.8</td><td>29.3</td><td>38.0</td><td>7.07</td></tr><tr><td/><td>IBM</td><td>37.1</td><td>25.0</td><td>47.8</td><td>7.84</td></tr><tr><td/><td>GE</td><td>35.4</td><td>25.3</td><td>48.5</td><td>7.83</td></tr><tr><td>PB</td><td/><td>37.0</td><td>20.7</td><td>47.0</td><td>7.99</td></tr><tr><td>AT</td><td/><td>30.3</td><td>20.6</td><td>56.8</td><td>8.57</td></tr></table>",
"type_str": "table",
"text": "Translation performance [%] for the German-English Verbmobil task (251 sentences).",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>System</td><td colspan=\"3\">WER PER BLEU NIST</td></tr><tr><td>SWB IBM</td><td>38.8 27.6</td><td>55.3</td><td>8.00</td></tr><tr><td>PB</td><td>26.5 18.1</td><td>67.9</td><td>9.07</td></tr><tr><td>AT</td><td>28.9 20.1</td><td>63.8</td><td>8.76</td></tr></table>",
"type_str": "table",
"text": "Translation performance [%] for the Spanish-English Xerox task (1125 sentences).",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table><tr><td colspan=\"5\">: Translation performance [%] for the French-</td></tr><tr><td colspan=\"5\">English Canadian Hansards task (5432 sentences).</td></tr><tr><td colspan=\"5\">System Variant WER PER BLEU NIST</td></tr><tr><td>SWB</td><td>Mon</td><td>65.2 53.0</td><td>19.8</td><td>5.96</td></tr><tr><td/><td>IBM</td><td>64.5 51.3</td><td>20.7</td><td>6.21</td></tr><tr><td>PB</td><td/><td>57.8 46.6</td><td>27.8</td><td>7.15</td></tr><tr><td>AT</td><td/><td>61.1 49.1</td><td>26.0</td><td>6.71</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"4\">Verbmobil Xerox Hansards</td></tr><tr><td colspan=\"3\">avg. sentence length</td><td/><td>10.5</td><td>13.5</td><td/><td>18.0</td></tr><tr><td colspan=\"3\">seconds / sentence</td><td/><td colspan=\"2\">0.006 0.007</td><td/><td>0.794</td></tr><tr><td colspan=\"3\">words / second</td><td/><td>1642</td><td>1448</td><td/><td>22.8</td></tr><tr><td/><td>1.6</td><td/><td/><td/><td/><td/></tr><tr><td/><td>1.4</td><td/><td/><td/><td/><td/></tr><tr><td/><td>1.2</td><td/><td/><td/><td/><td/></tr><tr><td/><td>1</td><td/><td/><td/><td/><td/></tr><tr><td>time</td><td>0.8</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0.6</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0.4</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0.2</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td><td>25</td><td>30</td></tr><tr><td/><td/><td/><td/><td>sentence length</td><td/><td/></tr></table>",
"type_str": "table",
"text": "Translation Speed for all tasks on a AMD Athlon 2.2GHz.",
"num": null
}
}
}
}