{
"paper_id": "C04-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:18:35.575367Z"
},
"title": "Symmetric Word Alignments for Statistical Machine Translation",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {
"postCode": "D-52056",
"settlement": "Aachen",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {
"postCode": "D-52056",
"settlement": "Aachen",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RWTH Aachen University",
"location": {
"postCode": "D-52056",
"settlement": "Aachen",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.",
"pdf_parse": {
"paper_id": "C04-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word-aligned bilingual corpora provide important knowledge for many natural language processing tasks, such as the extraction of bilingual word or phrase lexica (Melamed, 2000; Och and Ney, 2000) . The solutions of these problems depend heavily on the quality of the word alignment (Och and Ney, 2000) . Word alignment models were first introduced in statistical machine translation (Brown et al., 1993) . An alignment describes a mapping from source sentence words to target sentence words.",
"cite_spans": [
{
"start": 161,
"end": 176,
"text": "(Melamed, 2000;",
"ref_id": "BIBREF5"
},
{
"start": 177,
"end": 195,
"text": "Och and Ney, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 282,
"end": 301,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 383,
"end": 403,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using the IBM translation models IBM-1 to IBM-5 (Brown et al., 1993) , as well as the Hidden-Markov alignment model (Vogel et al., 1996) , we can produce alignments of good quality. However, all these models constrain the alignments so that a source word can be aligned to at most one target word. This constraint is useful to reduce the computational complexity of the model training, but makes it hard to align phrases in the target language (English) such as 'the day after tomorrow' to one word in the source language (German) '\u00fcbermorgen'. We will present a word alignment algorithm which avoids this constraint and produces symmetric word alignments. This algorithm considers the alignment problem as a task of finding the edge cover with minimal costs in a bipartite graph. The parameters of the IBM models and HMM, in particular the state occupation probabilities, will be used to determine the costs of aligning a specific source word to a target word.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 116,
"end": 136,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will evaluate the suggested alignment methods on the German-English Verbmobil task and the French-English Canadian Hansards task. We will show statistically significant improvements compared to state-ofthe-art results in (Och and Ney, 2003) .",
"cite_spans": [
{
"start": 224,
"end": 243,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we will give an overview of the commonly used statistical word alignment techniques. They are based on the sourcechannel approach to statistical machine translation (Brown et al., 1993) . We are given a source language sentence f J 1 := f 1 ...f j ...f J which has to be translated into a target language sentence e I 1 := e 1 ...e i ...e I . Among all possible target language sentences, we will choose the sentence with the highest probability:",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "e I 1 = argmax e I 1 P r(e I 1 |f J 1 ) = argmax e I 1 P r(e I 1 ) \u2022 P r(f J 1 |e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "This decomposition into two knowledge sources allows for an independent modeling of target language model P r(e I 1 ) and translation model P r(f J 1 |e I 1 ). Into the translation model, the word alignment A is introduced as a hidden variable:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "P r(f J 1 |e I 1 ) = A P r(f J 1 , A|e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "Usually, the alignment is restricted in the sense that each source word is aligned to at most one target word, i.e. A = a J 1 . The alignment may contain the connection a j = 0 with the 'empty' word e 0 to account for source sentence words that are not aligned to any target word at all. A detailed description of the popular translation/alignment models IBM-1 to IBM-5 (Brown et al., 1993) , as well as the Hidden-Markov alignment model (HMM) (Vogel et al., 1996) can be found in (Och and Ney, 2003) . Model 6 is a loglinear combination of the IBM-4, IBM-1, and the HMM alignment models.",
"cite_spans": [
{
"start": 370,
"end": 390,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 444,
"end": 464,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF8"
},
{
"start": 481,
"end": 500,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "A Viterbi alignment\u00c2 of a specific model is an alignment for which the following equation holds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "\u00c2 = argmax A P r(f J 1 , A|e I 1 ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Word Alignment Models",
"sec_num": "2"
},
{
"text": "The training of all alignment models is done using the EM-algorithm. In the E-step, the counts for each sentence pair (f J 1 , e I 1 ) are calculated. Here, we present this calculation on the example of the HMM. For its lexicon parameters, the marginal probability of a target word e i to occur at the target sentence position i as the translation of the source word f j at the source sentence position j is estimated with the following sum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "p j (i, f J 1 |e I 1 ) = a J 1 :a j =i P r(f J 1 , a J 1 |e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "This value represents the likelihood of aligning f j to e i via every possible alignment A = a J 1 that includes the alignment connection a j = i. By normalizing over the target sentence positions, we arrive at the state occupation probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "p j (i|f J 1 , e I 1 ) = p j (i, f J 1 |e I 1 ) I i =1 p j (i , f J 1 |e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "In the M-step of the EM training, the state occupation probabilities are aggregated for all words in the source and target vocabularies by taking the sum over all training sentence pairs. After proper renormalization the lexicon probabilities p(f |e) are determined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
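{
"text": "To make the aggregation step concrete, the following Python sketch (our own minimal illustration, not the original training code; the corpus representation is an assumption) re-estimates the lexicon probabilities from per-sentence state occupation probabilities:\n\nfrom collections import defaultdict\n\ndef m_step_lexicon(corpus):\n    # corpus: iterable of (f_words, e_words, post) triples, where post[j][i]\n    # is the state occupation probability p_j(i | f_1^J, e_1^I) of one\n    # training sentence pair, as computed in the E-step.\n    counts = defaultdict(float)  # aggregated fractional counts N(f, e)\n    totals = defaultdict(float)  # normalizers N(e)\n    for f_words, e_words, post in corpus:\n        for j, f in enumerate(f_words):\n            for i, e in enumerate(e_words):\n                counts[(f, e)] += post[j][i]\n                totals[e] += post[j][i]\n    # renormalize the aggregated counts to lexicon probabilities p(f | e)\n    return {(f, e): c / totals[e] for (f, e), c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},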
{
"text": "Similarly, the training can be performed in the inverse (target-to-source) direction, yielding the state occupation probabilities p i (j|e I 1 , f J 1 ). The negated logarithms of the state occupation probabilities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "w(i, j; f J 1 , e I 1 ) := \u2212 log p j (i|f J 1 , e I 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "(1) can be viewed as costs of aligning the source word f j with the target word e i . Thus, the word alignment task can be formulated as the task of finding a mapping between the source and the target words, so that each source and each target position is covered and the total costs of the alignment are minimal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
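{
"text": "A minimal sketch of this cost computation (our own illustration; the joint marginals are assumed to come from the E-step described above, e.g. from the Baum-Welch pass of the HMM):\n\nimport numpy as np\n\ndef state_occupation_costs(joint):\n    # joint[j, i] = p_j(i, f_1^J | e_1^I): accumulated probability mass of\n    # all alignments that connect source position j with target position i.\n    post = joint / joint.sum(axis=1, keepdims=True)  # p_j(i | f_1^J, e_1^I)\n    post = np.maximum(post, 1e-300)  # floor to avoid -log(0)\n    return -np.log(post).T  # w[i, j] as in Equation (1), shape (I, J)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},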
{
"text": "Using state occupation probabilities for word alignment modeling results in a number of advantages. First of all, in calculation of these probabilities with the models IBM-1, IBM-2 and HMM the EM-algorithm is performed exact, i.e. the summation over all alignments is efficiently performed in the Estep. For the HMM this is done using the Baum-Welch algorithm (Baum, 1972) . So far, an efficient algorithm to compute the sum over all alignments in the fertility models IBM-3 to IBM-5 is not known. Therefore, this sum is approximated using a subset of promising alignments (Och and Ney, 2000) . In both cases, the resulting estimates are more precise than the ones obtained by the maximum approximation, i. e. by considering only the Viterbi alignment.",
"cite_spans": [
{
"start": 360,
"end": 372,
"text": "(Baum, 1972)",
"ref_id": "BIBREF0"
},
{
"start": 573,
"end": 592,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "Instead of using the state occupation probabilities from only one training direction as costs (Equation 1), we can interpolate the state occupation probabilities from the sourceto-target and the target-to-source training for each pair (i,j) of positions in a sentence pair (f J 1 , e I 1 ). This will improve the estimation of the local alignment costs. Having such symmetrized costs, we can employ the graph alignment algorithms (cf. Section 4) to produce reliable alignment connections which include many-to-one and one-to-many alignment relationships. The presence of both relationship types characterizes a symmetric alignment that can potentially improve the translation results (Figure 1 shows an example of a symmetric alignment).",
"cite_spans": [],
"ref_spans": [
{
"start": 684,
"end": 693,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "Another important advantage is the efficiency of the graph algorithms used to deter- Figure 1 : Example of a symmetric alignment with one-to-many and many-to-one connections (Verbmobil task, spontaneous speech). mine the final symmetric alignment. They will be discussed in Section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "State Occupation Probabilities",
"sec_num": "3"
},
{
"text": "In this section, we describe the alignment extraction algorithms. We assume that for each sentence pair (f J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "1 , e I 1 ) we are given a cost matrix C. 1 The elements of this matrix c ij are the local costs that result from aligning source word f j to target word e i . For a given alignment A \u2286 I \u00d7 J, we define the costs of this alignment c(A) as the sum of the local costs of all aligned word pairs:",
"cite_spans": [
{
"start": 42,
"end": 43,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "c(A) = (i,j)\u2208A c ij",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "(2) Now, our task is to find the alignment with the minimum costs. Obviously, the empty alignment has always costs of zero and would be optimal. To avoid this, we introduce additional constraints. The first constraint is source sentence coverage. Thus each source word has to be aligned to at least one target word or alternatively to the empty word. The second constraint is target sentence coverage. Similar to the source sentence coverage thus each target word is aligned to at least one source word or the empty word. Enforcing only the source sentence coverage, the minimum cost alignment is a mapping from source positions j to target positions a j , including zero for the empty word. Each target position a j can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "a j = argmin i {c ij }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "This means, in each column we choose the row with the minimum costs. This method resembles the common IBM models in the sense 1 For notational convenience, we omit the dependency on the sentence pair (f J 1 , e I 1 ) in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "that the IBM models are also a mapping from source positions to target positions. Therefore, this method is comparable to the IBM models for the source-to-target direction. Similarly, if we enforce only the target sentence coverage, the minimum cost alignment is a mapping from target positions i to source positions b i . Here, we have to choose in each row the column with the minimum costs. The complexity of these algorithms is in O(I \u2022 J).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
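{
"text": "Both one-sided mappings amount to a row-wise or column-wise minimization over the cost matrix. A minimal sketch (our own illustration; the handling of the empty word is omitted):\n\nimport numpy as np\n\ndef one_sided_alignment(C, source_to_target=True):\n    # C[i, j]: cost of aligning target position i with source position j.\n    if source_to_target:\n        return C.argmin(axis=0)  # a_j: cheapest row for every column j\n    return C.argmin(axis=1)  # b_i: cheapest column for every row i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},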
{
"text": "The algorithms for determining such a nonsymmetric alignment are rather simple. A more interesting case arises, if we enforce both constraints, i.e. each source word as well as each target word has to be aligned at least once. Even in this case, we can find the global optimum in polynomial time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "The task is to find a symmetric alignment A, for which the costs c(A) are minimal (Equation 2). This task is equivalent to finding a minimum-weight edge cover (MWEC) in a complete bipartite graph 2 . The two node sets of this bipartite graph correspond to the source sentence positions and the target sentence positions, respectively. The costs of an edge are the elements of the cost matrix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "To solve the minimum-weight edge cover problem, we reduce it to the maximum-weight bipartite matching problem. As described in (Keijsper and Pendavingh, 1998) , this reduction is linear in the graph size. For the maximum-weight bipartite matching problem, well-known algorithm exist, e.g. the Hungarian method. The complexity of this algorithm is in O((I + J) \u2022 I \u2022 J). We will call the solution of the minimum-weight edge cover problem with the Hungarian method \"the MWEC algorithm\". In contrary, we will refer to the algorithm enforcing either source sentence coverage or target sentence coverage as the onesided minimum-weight edge cover algorithm (o-MWEC).",
"cite_spans": [
{
"start": 127,
"end": 158,
"text": "(Keijsper and Pendavingh, 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
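{
"text": "The following Python sketch illustrates one way to realize such a reduction (our own illustrative implementation, not the code used in the experiments): it applies the standard matching-based construction of a minimum-weight edge cover for non-negative costs, using scipy's linear_sum_assignment as a Hungarian-style solver on a matrix padded with zero-cost dummy nodes:\n\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef mwec(C):\n    # Minimum-weight edge cover of the complete bipartite graph whose edge\n    # weights are the entries of the non-negative (I x J) cost matrix C.\n    I, J = C.shape\n    row_min = C.min(axis=1)  # cheapest edge per target position i\n    col_min = C.min(axis=0)  # cheapest edge per source position j\n    # An edge belongs in the matching only if it is cheaper than covering\n    # both of its endpoints by their individually cheapest edges.\n    reduced = C - row_min[:, None] - col_min[None, :]\n    # Pad to a square matrix with zero-cost dummy nodes so that any real\n    # node may stay unmatched and be covered by its cheapest edge instead.\n    n = I + J\n    pad = np.zeros((n, n))\n    pad[:I, :J] = np.minimum(reduced, 0.0)\n    rows, cols = linear_sum_assignment(pad)\n    cover = {(i, j) for i, j in zip(rows, cols)\n             if i < I and j < J and reduced[i, j] < 0.0}\n    covered_i = {i for i, _ in cover}\n    covered_j = {j for _, j in cover}\n    cover |= {(i, int(C[i].argmin())) for i in range(I) if i not in covered_i}\n    cover |= {(int(C[:, j].argmin()), j) for j in range(J) if j not in covered_j}\n    return cover",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},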
{
"text": "The cost matrix of a sentence pair (f J 1 , e I 1 ) can be computed as a weighted linear interpolation of various cost types h m :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "c ij = M m=1 \u03bb m \u2022 h m (i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "In our experiments, we will use the negated logarithm of the state occupation probabilities as described in Section 3. To obtain a more symmetric estimate of the costs, we will interpolate both the source-to-target direction and the target-to-source direction (thus the state occupation probabilities are interpolated loglinearly). Because the alignments determined in the source-to-target training may substantially differ in quality from those produced in the target-to-source training, we will use an interpolation weight \u03b1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "c ij = \u03b1 \u2022 w(i, j; f J 1 , e I 1 ) + (1 \u2212 \u03b1) \u2022 w(j, i; e I 1 , f J 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
{
"text": "(3) Additional feature functions can be included to compute c ij ; for example, one could make use of a bilingual word or phrase dictionary. To apply the methods described in this section, we made two assumptions: first, the costs of an alignment can be computed as the sum of local costs. Second, the features have to be static in the sense that we have to fix the costs before aligning any word. Therefore, we cannot apply dynamic features such as the IBM-4 distortion model in a straightforward way. One way to overcome these restrictions lies in using the state occupation probabilities; e.g. for IBM-4, they contain the distortion model to some extent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},
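{
"text": "As an illustration of Equation 3 (our own sketch; the array names w_st and w_ts are hypothetical):\n\nimport numpy as np\n\ndef symmetrized_costs(w_st, w_ts, alpha=0.5):\n    # w_st[i, j] = -log p_j(i | f, e): source-to-target costs, shape (I, J).\n    # w_ts[j, i] = -log p_i(j | e, f): target-to-source costs, shape (J, I).\n    # Linear interpolation of the negated log probabilities, i.e. a\n    # log-linear interpolation of the state occupation probabilities.\n    return alpha * w_st + (1.0 - alpha) * w_ts.T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Algorithms",
"sec_num": "4"
},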
{
"text": "We use the same evaluation criterion as described in (Och and Ney, 2000) . We compare the generated word alignment to a reference alignment produced by human experts. The annotation scheme explicitly takes the ambiguity of the word alignment into account. There are two different kinds of alignments: sure alignments (S) which are used for unambiguous alignments and possible alignments (P ) which are used for alignments that might or might not exist. The P relation is used especially to align words within idiomatic expressions and free translations. It is guaranteed that the sure alignments are a subset of the possible alignments (S \u2286 P ). The obtained reference alignment may contain manyto-one and one-to-many relationships.",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": "5.1"
},
{
"text": "The quality of an alignment A is computed as appropriately redefined precision and recall measures. Additionally, we use the alignment error rate (AER), which is derived from the well-known F-measure. With these definitions a recall error can only occur if a S(ure) alignment is not found and a precision error can only occur if a found alignment is not even P (ossible).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": "5.1"
},
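{
"text": "These definitions translate directly into code. A small sketch (our own illustration, assuming alignments are represented as Python sets of (i, j) position pairs):\n\ndef alignment_error_rate(A, S, P):\n    # A: hypothesis alignment; S: sure reference links; P: possible\n    # reference links (with S being a subset of P).\n    recall = len(A & S) / len(S)\n    precision = len(A & P) / len(A)\n    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))\n    return precision, recall, aer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criterion",
"sec_num": "5.1"
},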
{
"text": "We evaluated the presented lexicon symmetrization methods on the Verbmobil and the Canadian Hansards task. The German-English Verbmobil task (Wahlster, 2000) is a speech translation task in the domain of appointment scheduling, travel planning and hotel reservation. The French-English Canadian Hansards task consists of the debates in the Canadian Parliament. The corpus statistics are shown in Table 1 and Table 2 . The number of running words and the vocabularies are based on full-form words including punctuation marks. As in (Och and Ney, 2003) , the first 100 sentences of the test corpus are used as a development corpus to optimize model parameters that are not trained via the EM algorithm, e.g. the interpolation weights. The remaining part of the test corpus is used to evaluate the models.",
"cite_spans": [
{
"start": 532,
"end": 551,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 396,
"end": 416,
"text": "Table 1 and Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "We use the same training schemes (model sequences) as presented in (Och and Ney, 2003) : 1 5 H 5 3 3 4 3 6 3 for the Verbmobil Task , i.e. 5 iteration of IBM-1, 5 iterations of the HMM, 3 iteration of IBM-3, etc.; for the Canadian Hansards task, we use 1 5 H 10 3 3 4 3 6 3 . We refer to these schemes as the Model 6 schemes. For comparison, we also perform less sophisticated trainings, to which we refer as the HMM schemes (1 5 H 10 and 1 5 H 5 , respectively), as well as the IBM Model 4 schemes (1 5 H 10 3 3 4 3 and 1 5 H 5 3 3 4 3 ).",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "In all training schemes we use a conventional dictionary (possibly containing phrases) as additional training material. Because we use the same training and testing conditions as (Och and Ney, 2003) , we will refer to the results presented in that article as the baseline results.",
"cite_spans": [
{
"start": 179,
"end": 198,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "In the first experiments, we use the state occupation probabilities from only one translation direction to determine the word alignment. This allows for a fair comparison with the Viterbi alignment computed as the result of the training procedure. In the source-totarget translation direction, we cannot estimate the probability for the target words with fertility zero and choose to set it to 0. In this case, the minimum weight edge cover problem is solved by the one-sided MWEC algorithm. Like the Viterbi alignments, the alignments produced by this algorithm satisfy the constraint that multiple source (target) words can only be aligned to one target (source) word. Tables 3 and 4 show the performance of the one-sided MWEC algorithm in comparison with the experiment reported by (Och and Ney, 2003) . We report not only the final alignment error rates, but also the intermediate results for the HMM and IBM-4 training schemes.",
"cite_spans": [
{
"start": 785,
"end": 804,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 671,
"end": 685,
"text": "Tables 3 and 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Non-symmetric Alignments",
"sec_num": "5.3"
},
{
"text": "For IBM-3 to IBM-5, the Viterbi alignment and a set of promising alignments are used to determine the state occupation probabilities. Consequently, we observe similar alignment quality when comparing the Viterbi and the one-sided MWEC alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-symmetric Alignments",
"sec_num": "5.3"
},
{
"text": "We also evaluated the alignment quality after applying alignment generalization methods, i.e. we combine the alignment of both translation directions. Experimentally, the best generalization heuristic for the Canadian Hansards task is the intersection of the sourceto-target and the target-to-source alignments. For the Verbmobil task, the refined method of (Och and Ney, 2003) is used. Again, we observed similar alignment error rates when merging either the Viterbi alignments or the o-MWEC alignments. ",
"cite_spans": [
{
"start": 358,
"end": 377,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-symmetric Alignments",
"sec_num": "5.3"
},
{
"text": "The heuristically generalized Viterbi alignments presented in the previous section can potentially avoid the alignment constraints 3 . However, the choice of the optimal generalization heuristic may depend on a particular language pair and may require extensive manual optimization. In contrast, the symmetric MWEC algorithm is a systematic and theoretically well-founded approach to the task of producing a symmetric alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "In the experiments with the symmetric MWEC algorithm, the optimal interpolation parameter \u03b1 (see Equation 3) for the Verbmobil corpus was empirically determined as 0.8. This shows that the model parameters can be estimated more reliably in the direction from German to English. In the inverse Englishto-German alignment training, the mappings of many English words to one German word are not allowed by the modeling constraints, although such alignment mappings are significantly more frequent than mappings of many German words to one English word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "The experimentally best interpolation parameter for the Canadian Hansards corpus was \u03b1 = 0.5. Thus the model parameters estimated in the translation direction from French to English are as reliable as the ones estimated in the direction from English to French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "Lines 2a and 2b of Table 5 show the performance of the MWEC algorithm. The alignment error rates are slightly lower if the HMM or the full Model 6 training scheme is used to train the state occupation probabilities on the Canadian Hansards task. On the Verbmobil task, the improvement is more significant, yielding an alignment error rate of 4.1%.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "Columns 4 and 5 of Table 5 contain the results of the experiments, in which the costs c ij were determined as the loglinear interpolation of state occupation probabilities obtained from the HMM training scheme with those from IBM-4 (column 4) or from Model 6 (column 5). We set the interpolation parameters for the two translation directions proportional to the optimal values determined in the previous experiments. On the Verbmobil task, we obtain a further improvement of 19% relative over the baseline result reported in (Och and Ney, 2003) , reaching an AER as low as 3.8%.",
"cite_spans": [
{
"start": 525,
"end": 544,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "The improvements of the alignment quality on the Canadian Hansards task are less significant. The manual reference alignments for this task contain many possible connections and only a few sure connections (cf. Table 2). Thus automatic alignments consisting of only a few reliable alignment points are favored. Because the differences in the number of words and word order between French and English are not as dramatic as e.g. between German and English, the probability of the empty word alignment is not very high. Therefore, plenty of alignment points are produced by the MWEC algorithm, resulting in a high recall and low precision. To increase the precision, we replaced the empty word connection costs (previously trained as state occupation probabiliities using the EM algorithm) by the global, word-and position-independent costs depending only on one of the involved languages. The alignment error rates for these experiments are given in lines 3a and 3b of Table 5. The global empty word probability for the Canadian Hansards task was empirically set to 0.45 for French and for English, and, for the Verbmobil task, to 0.6 for German and 0.1 for English. On the Canadian Hansards task, we achieved further significant reduction of the AER. In particular, we reached an AER of 6.6% by performing only the HMM training. In this case the effectiveness of the MWEC algorithm is combined with the efficiency of the HMM training, resulting in a fast and robust alignment training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "We also tested the more simple one-sided MWEC algorithm. In contrast to the experiments presented in Section 5.3, we used the loglinear interpolated state occupation probabilities (given by the Equation 3) as costs. Thus, although the algorithm is not able to produce a symmetric alignment, it operates with symmetrized costs. In addition, we used a combination heuristic to obtain a symmetric alignment. The results of these experiments are presented in Table 5 , lines 4-6 a/b.",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "The performance of the one-sided MWEC algorithm turned out to be quite robust on both tasks. However, the o-MWEC alignments are not symmetric and the achieved low AER depends heavily on the differences between the involved languages, which may favor many-to-one alignments in one translation direction only. That is why on the Verbmobil task, when determining the mininum weight in each row for the translation direction from English to German, the alignment quality deteriorates, because the algorithm cannot produce alignments which map several English words to one German word (line 5b of Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 592,
"end": 599,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "Applying the generalization heuristics (line 6a/b of Table 5 ), we achieve an AER of 6.0% on the Canadian Hansards task when interpolating the state occupation probabilities trained with the HMM and with the IBM-4 schemes. On the Verbmobil task, the interpolation of the HMM and the Model 6 schemes yields the best result of 3.7% AER. In the latter experiment, we reached 97.3% precision and 95.2% recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Symmetric Alignments",
"sec_num": "5.4"
},
{
"text": "A description of the IBM models for statistical machine translation can be found in (Brown et al., 1993) . The HMM-based alignment model was introduced in (Vogel et al., 1996) . An overview of these models is given in (Och and Ney, 2003) . That article also introduces the Model 6; additionally, state-of-the-art results are presented for the Verbmobil task and the Canadian Hansards task for various configurations. Therefore, we chose them as baseline. Additional linguistic knowledge sources such as dependeny trees or parse trees were used in (Cherry and Lin, 2003; Gildea, 2003) . Bilingual bracketing methods were used to produce a word alignment in (Wu, 1997) . (Melamed, 2000) uses an alignment model that enforces one-to-one alignments for nonempty words. 8.4 6.9 7.8 --Hansards 2a. MWEC 7.9 9.3 7.5 8.2 7.4 3a. MWEC (global EW costs) 6.6 7.4 6.9 6.4 6.4 4a. o-MWEC T\u2192S 7.3 7.9 7.4 6.7 7.0 5a. S\u2192T 7.7 7.6 7.2 6.9 6.9 6a. S\u2194T (intersection) 7 ",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 155,
"end": 175,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF8"
},
{
"start": 218,
"end": 237,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 547,
"end": 569,
"text": "(Cherry and Lin, 2003;",
"ref_id": "BIBREF2"
},
{
"start": 570,
"end": 583,
"text": "Gildea, 2003)",
"ref_id": "BIBREF3"
},
{
"start": 656,
"end": 666,
"text": "(Wu, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 669,
"end": 684,
"text": "(Melamed, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we addressed the task of automatically generating symmetric word alignments for statistical machine translation. We exploited the state occupation probabilties derived from the IBM and HMM translation models. We used the negated logarithms of these probabilities as local alignment costs and reduced the word alignment problem to finding an edge cover with minimal costs in a bipartite graph. We presented efficient algorithms for the solution of this problem. We evaluated the performance of these algorithms by comparing the alignment quality to manual reference alignments. We showed that interpolating the alignment costs of the sourceto-target and the target-to-source translation directions can result in a significant improvement of the alignment quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In the future, we plan to integrate the graph algorithms into the iterative training procedure. Investigating the usefulness of additional feature functions might be interesting as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "An edge cover of G is a set of edges E such that each node of G is incident to at least one edge in E .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Consequently, we will use them as baseline for the experiments with symmetric alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially funded by the EU project TransType 2, IST-2001-32091. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An inequality and associated maximization technique in statistical estimation for probabilistic functions of markov processes",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
}
],
"year": 1972,
"venue": "Inequalities",
"volume": "3",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. E. Baum. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of markov processes. Inequalities, 3:1-8.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter esti- mation. Computational Linguistics, 19(2):263- 311, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Cherry and D. Lin. 2003. A probability model to improve word alignment. In Proc. of the 41th Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 88-95, Sap- poro, Japan, July.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Loosely tree-based alignment for machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Gildea. 2003. Loosely tree-based alignment for machine translation. In Proc. of the 41th An- nual Meeting of the Association for Computa- tional Linguistics (ACL), pages 80-87, Sapporo, Japan, July.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An efficient algorithm for minimum-weight bibranching",
"authors": [
{
"first": "J",
"middle": [],
"last": "Keijsper",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pendavingh",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of Combinatorial Theory Series B",
"volume": "73",
"issue": "2",
"pages": "130--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Keijsper and R. Pendavingh. 1998. An effi- cient algorithm for minimum-weight bibranch- ing. Journal of Combinatorial Theory Series B, 73(2):130-145, July.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. D. Melamed. 2000. Models of translational equivalence among words. Computational Lin- guistics, 26(2):221-249.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-447, Hong Kong, October.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Och and H. Ney. 2003. A systematic com- parison of various statistical alignment models. Computational Linguistics, 29(1):19-51, March.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "HMMbased word alignment in statistical translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "COLING '96: The 16th Int. Conf. on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vogel, H. Ney, and C. Tillmann. 1996. HMM- based word alignment in statistical translation. In COLING '96: The 16th Int. Conf. on Com- putational Linguistics, pages 836-841, Copen- hagen, Denmark, August.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Verbmobil: Foundations of speech-to-speech translations",
"authors": [],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wahlster, editor. 2000. Verbmobil: Founda- tions of speech-to-speech translations. Springer Verlag, Berlin, Germany, July.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel cor- pora. Computational Linguistics, 23(3):377- 403, September.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "recall = |A \u2229 S| |S| , precision = |A \u2229 P | |A| AER(S, P ; A) = 1 \u2212 |A \u2229 S| + |A \u2229 P | |A| + |S|",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Verbmobil task: corpus statistics.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Source/Target: German English</td></tr><tr><td>Train</td><td>Sentences</td><td>34 446</td></tr><tr><td/><td>Words</td><td colspan=\"2\">329 625 343 076</td></tr><tr><td/><td>Vocabulary</td><td>5 936</td><td>3 505</td></tr><tr><td/><td>Singletons</td><td>2 600</td><td>1 305</td></tr><tr><td colspan=\"2\">Dictionary Entries</td><td>4 404</td></tr><tr><td>Test</td><td>Sentences</td><td>354</td></tr><tr><td/><td>Words</td><td>3 233</td><td>3 109</td></tr><tr><td colspan=\"2\">S reference relations</td><td>2 559</td></tr><tr><td colspan=\"2\">P reference relations</td><td>4 596</td></tr></table>",
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">: Canadian Hansards: corpus statistics.</td></tr><tr><td/><td colspan=\"3\">Source/Target: French English</td></tr><tr><td>Train</td><td>Sentences</td><td colspan=\"2\">128K</td></tr><tr><td/><td>Words</td><td>2.12M</td><td>1.93M</td></tr><tr><td/><td colspan=\"2\">Vocabulary 37 542</td><td>29 414</td></tr><tr><td/><td>Singletons</td><td>12 986</td><td>9 572</td></tr><tr><td colspan=\"2\">Dictionary Entries</td><td colspan=\"2\">28 701</td></tr><tr><td>Test</td><td>Sentences</td><td>500</td></tr><tr><td/><td>Words</td><td>8 749</td><td>7 946</td></tr><tr><td colspan=\"2\">S reference relations</td><td colspan=\"2\">4 443</td></tr><tr><td colspan=\"2\">P reference relations</td><td colspan=\"2\">19 779</td></tr></table>",
"num": null
},
"TABREF2": {
"text": "AER [%] for non-symmetric alignment methods and for various models (HMM, IBM-4, Model 6) on the Canadian Hansards task.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Alignment method</td><td colspan=\"3\">HMM IBM4 M6</td></tr><tr><td>Baseline T\u2192S</td><td>14.1</td><td colspan=\"2\">12.9 11.9</td></tr><tr><td>S\u2192T</td><td>14.4</td><td colspan=\"2\">12.8 11.7</td></tr><tr><td>intersection</td><td>8.4</td><td>6.9</td><td>7.8</td></tr><tr><td>o-MWEC T\u2192S</td><td>14.0</td><td colspan=\"2\">13.1 11.9</td></tr><tr><td>S\u2192T</td><td>14.3</td><td colspan=\"2\">13.0 11.7</td></tr><tr><td>intersection</td><td>8.2</td><td>7.1</td><td>7.8</td></tr><tr><td colspan=\"4\">Table 4: AER [%] for non-symmetric align-</td></tr><tr><td colspan=\"4\">ment methods and for various models (HMM,</td></tr><tr><td colspan=\"4\">IBM-4, Model 6) on the Verbmobil task.</td></tr><tr><td colspan=\"4\">Alignment method HMM IBM4 M6</td></tr><tr><td>Baseline T\u2192S</td><td>7.6</td><td>4.8</td><td>4.6</td></tr><tr><td>S\u2192T</td><td>12.1</td><td>9.3</td><td>8.8</td></tr><tr><td>refined</td><td>7.1</td><td>4.7</td><td>4.7</td></tr><tr><td>o-MWEC T\u2192S</td><td>7.3</td><td>4.8</td><td>4.5</td></tr><tr><td>S\u2192T</td><td>12.0</td><td>9.3</td><td>8.5</td></tr><tr><td>refined</td><td>6.7</td><td>4.6</td><td>4.6</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "AER[%] for different alignment symmetrization methods and for various alignment models on the Canadian Hansards and the Verbmobil tasks (MWEC: minimum weight edge cover, EW: empty word).",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Symmetrization Method</td><td>HMM IBM4 M6 HMM + IBM4 HMM + M6</td></tr><tr><td>Canadian 1a. Baseline (intersection)</td><td/></tr></table>",
"num": null
}
}
}
}