{
"paper_id": "C04-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:19:08.904590Z"
},
"title": "Word to word alignment strategies",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word alignment is a challenging task aiming at the identification of translational relations between words and multi-word units in parallel corpora. Many alignment strategies are based on links between single words. Different strategies can be used to find the optimal word alignment using such one-toone word links including relations between multi-word units. In this paper seven algorithms are compared using a word alignment approach based on association clues and an English-Swedish bitext together with a handcrafted reference alignment used for evaluation.",
"pdf_parse": {
"paper_id": "C04-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Word alignment is a challenging task aiming at the identification of translational relations between words and multi-word units in parallel corpora. Many alignment strategies are based on links between single words. Different strategies can be used to find the optimal word alignment using such one-toone word links including relations between multi-word units. In this paper seven algorithms are compared using a word alignment approach based on association clues and an English-Swedish bitext together with a handcrafted reference alignment used for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word alignment is the task of identifying translational relations between words in parallel corpora with the aim of re-using them in natural language processing. Typical applications that make use of word alignment techniques are machine translation and multi-lingual lexicography. Several approaches have been proposed for the automatic alignment of words and phrases using statistical techniques and alignment heuristics, e.g. (Brown et al., 1993; Vogel et al., 1996; Garc\u00eda-Varea et al., 2002; Ahrenberg et al., 1998; Tiedemann, 1999; Tufis and Barbu, 2002; Melamed, 2000) . Word alignment usually includes links between so-called multi-word units (MWUs) in cases where lexical items cannot be split into separated words with appropriate translations in another language. See for example the alignment between an English sentence and a Swedish sentence illustrated in figure 1. There are MWUs in both languages aligned to corresponding translations in the other language. The Swedish compound \"mittplatsen\" corresponds to three words in English (\"the middle seat\") and the English verb \"dislike\" is translated into a Swedish particle verb \"tycker om\" (English: like) that has been negated using \"inte\". Most approaches model Jag tar mittplatsen, vilket jag inte tycker om, men det g\u00f6r mig inte s\u00e5 mycket.",
"cite_spans": [
{
"start": 429,
"end": 449,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF4"
},
{
"start": 450,
"end": 469,
"text": "Vogel et al., 1996;",
"ref_id": "BIBREF18"
},
{
"start": 470,
"end": 496,
"text": "Garc\u00eda-Varea et al., 2002;",
"ref_id": null
},
{
"start": 497,
"end": 520,
"text": "Ahrenberg et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 521,
"end": 537,
"text": "Tiedemann, 1999;",
"ref_id": "BIBREF15"
},
{
"start": 538,
"end": 560,
"text": "Tufis and Barbu, 2002;",
"ref_id": "BIBREF17"
},
{
"start": 561,
"end": 575,
"text": "Melamed, 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "I take the middle seat, which I dislike, but I am not really put out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bellow \"To Jerusalem and back: a personal account\" (Bellow, 1976) and its Swedish translation (Bellow, 1977) (the Bellow corpus).",
"cite_spans": [
{
"start": 51,
"end": 65,
"text": "(Bellow, 1976)",
"ref_id": "BIBREF2"
},
{
"start": 94,
"end": 108,
"text": "(Bellow, 1977)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1: A word alignment example from Saul",
"sec_num": null
},
{
"text": "word alignment as links between words in the source language and words in the target language as indicated by the arrows in figure 1. However, in cases like the English expression \"I am not really put out\" which corresponds to the Swedish expression \"det g\u00f6r mig inte s\u00e5 mycket\" there is no proper way of connecting single words with each other in order to express this relation. In some approaches such relations are constructed in form of an exhaustive set of links between all word pairs included in both expressions (Melamed, 1998; Mihalcea and Pedersen, 2003) . In other approaches complex expressions are identified in a pre-processing step in order to handle them as complex units in the same manner as single words in alignment (Smadja et al., 1996; Ahrenberg et al., 1998; Tiedemann, 1999) .",
"cite_spans": [
{
"start": 520,
"end": 535,
"text": "(Melamed, 1998;",
"ref_id": "BIBREF7"
},
{
"start": 536,
"end": 564,
"text": "Mihalcea and Pedersen, 2003)",
"ref_id": null
},
{
"start": 736,
"end": 757,
"text": "(Smadja et al., 1996;",
"ref_id": "BIBREF14"
},
{
"start": 758,
"end": 781,
"text": "Ahrenberg et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 782,
"end": 798,
"text": "Tiedemann, 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1: A word alignment example from Saul",
"sec_num": null
},
{
"text": "The one-to-one word linking approach seems to be very limited. However, single word links can be combined in order to describe links between multi-word units as illustrated in figure 1. In this paper we investigate different alignment strategies using this approach 1 . For this we apply clue alignment introduced in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1: A word alignment example from Saul",
"sec_num": null
},
{
"text": "The clue alignment approach has been presented in (Tiedemann, 2003) . Alignment clues represent probabilistic indications of associa-tions between lexical items collected from different sources. Declarative clues can be taken from linguistic resources such as bilingual dictionaries. They may also include pre-defined relations between lexical items based on certain features such as parts of speech. Estimated clues are derived from the parallel data using, for example, measures of co-occurrence (e.g. the Dice coefficient (Smadja et al., 1996) ), statistical alignment models (e.g. IBM models from statistical machine translation (Brown et al., 1993) ), or string similarity measures (e.g. the longest common sub-sequence ratio (Melamed, 1995) ). They can also be learned from previously aligned training data using linguistic and contextual features associated with aligned items. Relations between certain word classes with respect to the translational association of words belonging to these classes is one example of such clues that can be learned from aligned training data. In our experiments, for example, we will use clues that indicate relations between lexical items based on their part-of-speech tags and their positions in the sentence relative to each other. They are learned from automatically word-aligned training data.",
"cite_spans": [
{
"start": 50,
"end": 67,
"text": "(Tiedemann, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 525,
"end": 546,
"text": "(Smadja et al., 1996)",
"ref_id": "BIBREF14"
},
{
"start": 633,
"end": 653,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF4"
},
{
"start": 731,
"end": 746,
"text": "(Melamed, 1995)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word alignment with clues",
"sec_num": "2"
},
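The Dice coefficient mentioned above as one estimated clue source can be computed directly from co-occurrence counts. A minimal sketch, assuming segment-level co-occurrence counting over word types (the paper does not fix these details), could look like this:

```python
from collections import Counter

def dice_clues(bitext):
    """Dice coefficient as an estimated clue: 2*cooc(s,t) / (freq(s) + freq(t)).

    `bitext` is an iterable of (source_words, target_words) segment pairs;
    counts are taken over word types per segment (an assumption of this sketch).
    """
    src_freq, trg_freq, cooc = Counter(), Counter(), Counter()
    for src, trg in bitext:
        src_freq.update(set(src))
        trg_freq.update(set(trg))
        cooc.update((s, t) for s in set(src) for t in set(trg))
    return {(s, t): 2.0 * c / (src_freq[s] + trg_freq[t])
            for (s, t), c in cooc.items()}
```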
{
"text": "The clue alignment approach implements a way of combining association indicators on a word-to-word level. The combination of clues results in a two-dimensional clue matrix. The values in this matrix express the collected evidence of an association between word pairs in bitext segments taken from a parallel corpus. Word alignment is then the task of identifying the best links according to the associations indicated in the clue matrix. Several strategies for such an alignment are discussed in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word alignment with clues",
"sec_num": "2"
},
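The paper does not spell out the combination formula here; as a rough illustration, assuming the clues are combined as a weighted disjunction of probabilities (one plausible reading of weighted clue combination, not necessarily the exact method of the paper), a clue matrix for one bitext segment could be built as follows. The function names and the disjunction formula are assumptions of this sketch.

```python
from itertools import product

def clue_matrix(src_words, trg_words, clues, weights):
    """Combine weighted clue values into an N x M matrix of association evidence.

    `clues` is a list of functions mapping a (source word, target word) pair
    to a value in [0, 1]; `weights` holds one weight per clue.
    """
    matrix = [[0.0] * len(trg_words) for _ in src_words]
    for (i, s), (j, t) in product(enumerate(src_words), enumerate(trg_words)):
        p = 1.0
        for clue, w in zip(clues, weights):
            p *= 1.0 - w * clue(s, t)   # probabilistic "or" of weighted clues
        matrix[i][j] = 1.0 - p
    return matrix
```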
{
"text": "A clue matrix summarizes information from various sources that can be used for the identification of translation relations. However, there is no obvious way to utilize this information for word alignment as we explicitly include multiword units (MWUs) in our approach. The clue matrix in figure 2 has been obtained for a bitext segment from our English-Swedish test corpus (the Bellow corpus) using a set of weighted declarative and estimated clues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "There are many ways of \"clustering\" words together and there is no obvious maximization procedure for finding the alignment optimum when MWUs are involved. Figure 2: A clue matrix (all values in %).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "cedure depends very much on the definition of an optimal alignment. The best alignment for our example would probably be the set of the following links:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "links = { \"no one\" \u2194 \"ingen\", \"is patient\" \u2194 \"visar t\u00e5lamod\", \"very\" \u2194 \"s\u00e4rskilt mycket\" }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "A typical procedure for automatic word alignment is to start with one-to-one word links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "Links that have common source or target language words are called overlapping links. Sets of overlapping links, which do not overlap with any other link outside the set, are called link clusters (LC). Aligning words one by one often produces overlaps and in this way implicitly creates aligned multi-word-units as part of link clusters. A general word-to-word alignment L for a given bitext segment with N source language words (s 1 s 2 ...s N ) and M target language words (t 1 t 2 ...t M ) can be formally described as a set of links",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "L = {L 1 , L 2 , ..., L x } with L x = [s x 1 , t x 2 ] , x 1 \u2208 {1..N }, x 2 \u2208 {1..M }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "This general definition allows varying numbers of links (0 \u2264 x \u2264 N * M ) within possible alignments L. It is not straightforward how to find the optimal alignment as L may include different numbers of links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment strategies",
"sec_num": "3"
},
{
"text": "One word-to-word alignment approach is to assume a directional word alignment model similar to the models in statistical machine translation. The directional alignment model assumes that there is at most one link for each source language word. Using alignment clues, this can be expressed as the following optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "L D = argmax L D N n=1 C(L D n ) where L D = {L D 1 , L D 2 , .., L D N } is a set of links L D n = s n , t a D n with a D n \u2208 {1..M } and C(L D n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "is the combined clue value for the linked items s n and t a D n . In other words, word alignment is the search for the best link for each source language word. Directional models do not allow multiple links from one item to several target items. However, target items can be linked to multiple source language words as they can be aligned to the same target language word. The direction of alignment can easily be reversed, which leads to the inverse directional alignment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "L I = argmax L I M m=1 C(L I m ) with links L I m = s a I m , t m and a I m \u2208 {1..N }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "In the inverse directional alignment, source language words can be linked to multiple words but not the other way around. The following figure illustrates directional alignment models applied to the example in figure 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "L D = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 no ingen one ingen is visar very s\u00e4rskilt patient mycket \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe LC D = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 no one ingen is visar very s\u00e4rskilt patient mycket \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
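A minimal sketch of the two directional searches and of grouping overlapping links into link clusters, assuming the clue matrix is given as a list of rows C[n][m] (zero-valued cells are not filtered out here; the paper leaves such details open):

```python
def directional_alignment(matrix):
    """L^D: link each source word n to its best target word, argmax_m C[n][m]."""
    return [(n, max(range(len(row)), key=row.__getitem__))
            for n, row in enumerate(matrix)]

def inverse_directional_alignment(matrix):
    """L^I: link each target word m to its best source word, argmax_n C[n][m]."""
    n_src, n_trg = len(matrix), len(matrix[0])
    return [(max(range(n_src), key=lambda n: matrix[n][m]), m)
            for m in range(n_trg)]

def link_clusters(links):
    """Group overlapping links (links sharing a source or target word) into clusters."""
    clusters = []
    for link in links:
        touching = [c for c in clusters
                    if any(l[0] == link[0] or l[1] == link[1] for l in c)]
        merged = {link}
        for c in touching:
            merged |= c
            clusters.remove(c)
        clusters.append(merged)
    return clusters
```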
{
"text": "Using the inverse directional alignment strategy we would obtain the following links: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "L I = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Directional alignment models",
"sec_num": "3.1"
},
{
"text": "Directional link sets can be combined in several ways. The union of link sets (L \u222a =L D \u222aL I ) usually causes many overlaps and, hence, very large link clusters. On the other hand, an intersection of link sets (L \u2229 =L D \u2229L I ) removes all overlaps and leaves only highly confident one-toone word links behind. Using the same example from above we obtain the following alignments: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "L \u222a = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "LC \u2229 =L \u2229",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "The union and the intersection of links do not produce satisfactory results as seen in the example. Another alignment strategy is a refined combination of link sets (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "L R = {L D \u2229L I } \u222a {L R 1 , ..., L R r })",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "as suggested by (Och and Ney, 2000b) . In this approach, the intersection of links is iteratively extended by additional links L R r which pass one of the following two constraints:",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Och and Ney, 2000b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "\u2022 A new link is accepted if both items in the link are not yet aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "\u2022 Mapped on a two-dimensional bitext space, the new link is either vertically or horizontally adjacent to an existing link and the new link does not cause any link to be adjacent to other links in both dimensions (horizontally and vertically).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "Applying this approach to the example, we get: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
{
"text": "L R = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined directional alignment",
"sec_num": "3.2"
},
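The union and intersection are plain set operations on the two directional link sets; the refined combination can be sketched as below. The adjacency test here is simplified to Manhattan-distance-1 neighbourhood and the second part of the adjacency constraint above (no link may become adjacent to other links in both dimensions) is omitted, so this is only an approximation of the described procedure:

```python
def refined_alignment(matrix, links_d, links_i):
    """Och & Ney-style refined symmetrization (simplified sketch).

    Start from the intersection of the directional link sets and iteratively
    add links from the union that either connect two still-unaligned words
    or are horizontally/vertically adjacent to an already accepted link.
    """
    union = set(links_d) | set(links_i)
    refined = set(links_d) & set(links_i)
    candidates = sorted(union - refined, key=lambda l: -matrix[l[0]][l[1]])
    for n, m in candidates:
        src_free = all(n != ln for ln, _ in refined)
        trg_free = all(m != lm for _, lm in refined)
        adjacent = any(abs(n - ln) + abs(m - lm) == 1 for ln, lm in refined)
        if (src_free and trg_free) or adjacent:
            refined.add((n, m))
    return refined
```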
{
"text": "Another alignment approach is the competitive linking approach proposed by Melamed (Melamed, 1996) . In this approach, one assumes that there are only one-to-one word links. The alignment is done in a greedy \"bestfirst\" search manner where links with the highest association scores are aligned first, and the aligned items are then immediately removed from the search space. This process is repeated until no more links can be found. In this way, the optimal alignment (L C ) for nonoverlapping one-to-one links is found. The number of possible links in an alignment is reduced to min(N, M ). Using competitive linking with our example we yield:",
"cite_spans": [
{
"start": 83,
"end": 98,
"text": "(Melamed, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Competitive linking",
"sec_num": "3.3"
},
{
"text": "L C = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 no ingen very s\u00e4rskilt is visar one t\u00e5lamod patient mycket \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe LC C =L C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Competitive linking",
"sec_num": "3.3"
},
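Competitive linking as described above reduces to a greedy best-first pass over the matrix cells; a minimal sketch (the clue matrix again assumed as a list of rows):

```python
def competitive_linking(matrix):
    """Greedy one-to-one linking: take the best remaining cell, block its row/column."""
    cells = [(matrix[n][m], n, m)
             for n in range(len(matrix)) for m in range(len(matrix[0]))]
    used_src, used_trg, links = set(), set(), []
    for value, n, m in sorted(cells, reverse=True):
        if value <= 0:
            break                           # no association evidence left
        if n not in used_src and m not in used_trg:
            links.append((n, m))
            used_src.add(n)
            used_trg.add(m)
    return links                            # at most min(N, M) links
```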
{
"text": "Another iterative alignment approach has been proposed in (Tiedemann, 2003) . In this approach, the link",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "(Tiedemann, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained best-first alignment",
"sec_num": "3.4"
},
{
"text": "L B x = [s x 1 , t x 2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained best-first alignment",
"sec_num": "3.4"
},
{
"text": "with the highest score in the clue matrix\u0108(s x 1 , t x 2 ) = max s i ,t j (C(s i , t j )) is added to the set of link clusters if it fulfills certain constraints. The top score is removed from the matrix (i.e. set to zero) and the link search is repeated until no more links can be found. This is basically a constrained best-first search. Several constraints are possible. In (Tiedemann, 2003) an adjacency check is suggested, i.e. overlapping links are accepted only if they are adjacent to other links in one and only one existing link cluster. Non-overlapping links are always accepted (i.e. a non-overlapping link creates a new link cluster). Other possible constraints are clue value thresholds, thresholds for clue score differences between adjacent links, or syntactic constraints (e.g. that link clusters may not cross phrase boundaries). Using a best-first search strategy with the adjacency constraint we obtain the following alignment: ",
"cite_spans": [
{
"start": 377,
"end": 394,
"text": "(Tiedemann, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained best-first alignment",
"sec_num": "3.4"
},
{
"text": "L B = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained best-first alignment",
"sec_num": "3.4"
},
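A sketch of the constrained best-first search with the adjacency check, following the description above; the handling of links that overlap one cluster while being adjacent to a different one is a simplification, and the clue-threshold constraint is exposed as an optional parameter:

```python
def best_first_alignment(matrix, threshold=0.0):
    """Constrained best-first clue alignment returning a list of link clusters."""
    C = [row[:] for row in matrix]          # working copy; top scores get zeroed out
    clusters = []
    while True:
        value, n, m = max((C[i][j], i, j)
                          for i in range(len(C)) for j in range(len(C[0])))
        if value <= threshold:
            return clusters
        C[n][m] = 0.0                       # remove the top score from the matrix
        overlapping = [c for c in clusters
                       if any(i == n or j == m for i, j in c)]
        adjacent = [c for c in clusters
                    if any(abs(i - n) + abs(j - m) == 1 for i, j in c)]
        if not overlapping:
            clusters.append({(n, m)})       # non-overlapping links open a new cluster
        elif len(adjacent) == 1:
            adjacent[0].add((n, m))         # overlapping link: accept only if adjacent
                                            # to exactly one existing cluster
        # otherwise the link is rejected and the search continues
```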
{
"text": "None of the alignment approaches described above produces the preferred reference alignment in our example using the given clue matrix. However, simple iterative procedures come very close to the reference and produce acceptable alignments even for multi-word units, which is promising for an automatic clue alignment system. Directional alignment models depend very much on the relation between the source and the target language. One direction usually works better than the other, e.g. an alignment from English to Swedish is better than Swedish to English because in English terms and concepts are often split into several words whereas Swedish tends to contain many compositional compounds. Symmetric approaches to word alignment are certainly more appropriate for general alignment systems than directional ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "3.5"
},
{
"text": "Word alignment quality is usually measured in terms of precision and recall. Often, previously created gold standards are used as reference data in order to simplify automatic tests of alignment attempts. Gold standards can be re-used for additional test runs which is important when examining different parameter settings. However, recall and precision derived from information retrieval have to be adjusted for the task of word alignment. The main difficulty with these measures in connection with word alignment arises with links between MWUs that cause partially correct alignments. It is not straightforward how to judge such links in order to compute precision and recall. In order to account for partiality we use a slightly modified version of the partiality score Q proposed in (Ahrenberg et al., 2000) 2 :",
"cite_spans": [
{
"start": 787,
"end": 811,
"text": "(Ahrenberg et al., 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "Q precision x = |alg x src \u2229 corr x src | + |alg x trg \u2229 corr x trg | |alg x src | + |alg x trg | Q recall x = |alg x src \u2229 corr x src | + |alg x trg \u2229 corr x trg | |corr x src | + |corr x trg | The set of alg x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "src includes all source language words of all proposed links if at least one of them is partially correct with respect to the reference link x from the gold standard. Similarly, alg x trg refers to all the proposed target language words. corr x src and corr x trg refer to the sets of source and target language words in link x of the gold standard. Using the partiality value Q, we can define the recall and precision metrics as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "R mwu = X x=1 Q recall x |correct| , P mwu = X x=1 Q precision x |aligned|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "A balanced F-score can be used to combine both, precision and recall: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "F mwu = (2 * P mwu * R mwu )/(P mwu + R mwu ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
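The metrics above translate directly into code. In this sketch, links are pairs of word sets, a proposed link counts as partially correct if it shares at least one word with the gold link on either side, and |aligned| is passed in as the number of system links considered; these last two points are interpretations, since the formulas above do not fix them explicitly:

```python
def q_scores(proposed, gold_link):
    """Partiality scores (Q^precision_x, Q^recall_x) for one gold-standard link x."""
    corr_src, corr_trg = gold_link
    alg_src, alg_trg = set(), set()
    for src, trg in proposed:
        if src & corr_src or trg & corr_trg:        # partially correct proposed link
            alg_src |= src
            alg_trg |= trg
    hits = len(alg_src & corr_src) + len(alg_trg & corr_trg)
    q_prec = hits / (len(alg_src) + len(alg_trg)) if alg_src or alg_trg else 0.0
    q_rec = hits / (len(corr_src) + len(corr_trg))
    return q_prec, q_rec

def evaluate(proposed, gold, n_aligned):
    """P_mwu, R_mwu and the balanced F_mwu over a gold-standard sample."""
    pairs = [q_scores(proposed, link) for link in gold]
    p = sum(qp for qp, _ in pairs) / n_aligned
    r = sum(qr for _, qr in pairs) / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```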
{
"text": "Figure 3: Different alignment search strategies. Clue alignment settings: dice+pp, giza, and giza+pp. Alignment strategies: directional (L^D), inverse directional (L^I), union (L_\u222a), intersection (L_\u2229), refined (L^R), competitive linking (L^C), and constrained best-first (L^B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "Alternative measures for the evaluation of one-to-one word links have been proposed in (Och and Ney, 2000a; Och and Ney, 2003) . However, these measures require completely aligned bitext segments as reference data. Our gold standards include random samples from the corpus instead (Ahrenberg et al., 2000) . Furthermore, we do not split MWU links as proposed by (Och and Ney, 2000a) . Therefore, the measures proposed above are a natural choice for our evaluations.",
"cite_spans": [
{
"start": 87,
"end": 107,
"text": "(Och and Ney, 2000a;",
"ref_id": "BIBREF10"
},
{
"start": 108,
"end": 126,
"text": "Och and Ney, 2003)",
"ref_id": "BIBREF12"
},
{
"start": 281,
"end": 305,
"text": "(Ahrenberg et al., 2000)",
"ref_id": "BIBREF1"
},
{
"start": 362,
"end": 382,
"text": "(Och and Ney, 2000a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methodology",
"sec_num": "4"
},
{
"text": "Several alignment search strategies have been discussed in the previous sections. Our clue aligner implements these strategies in order to test their impact on the alignment performance. In the experiments we used one of our English-Swedish bitext from the PLUG corpus (S\u00e5gvall Hein, 2002) , the novel \"To Jerusalem and back: A personal account\" by Saul Bellow. This corpus is fairly small (about 170,000 words) and therefore well suited for extensive studies of alignment parameters. For evaluation, a gold standard of 468 manually aligned links is used (Merkel et al., 2002) . It includes 122 links with MWUs either on the source or on the target side (= 26% of the gold standard). 109 links contain source language MWUs, 59 links target language MWUs, and 46 links MWUs in both languages. 10 links are null links, i.e. a link of one word to an empty string. Three different clue types are used for the alignment: the Dice coefficient (dice), lexical translation probabilities derived from statistical translation models (giza) using the GIZA++ toolbox (Och and Ney, 2003) , and, finally, POS/relative-wordposition-clues learned from previous alignments (pp). Alignment strategies are compared on the basis of three different settings: dice+pp, giza, and giza+pp. In figure 3, the alignment results are shown for the three clue settings using different search strategies as discussed earlier. Figure 3 illustrates the relation between precision and recall when applying different algorithms. As expected, the intersection of directional alignment strategies yields the highest precision at the expense of recall, which is generally lower than for the other approaches. Contrary to the intersection, the union of directional links produces alignments with the highest recall values but lower precision than all other search algorithms. Too many (partially) incorrect MWUs are included in the union of directional links. The intersection on the other hand includes only one-to-one word links that tend to be correct. However, many links are missed in this strategy evident in the low recall val- ues. Directional alignment strategies generally yield lower F-values than other refined symmetric alignment strategies. Their implementation is straightforward but the results are highly dependent on the language pair under consideration. The differences between the two alignment directions in our example are surprisingly inconsistent. Using the giza clues both alignment results are very close in terms of precision and recall whereas a larger difference can be observed using the other two clue settings when applying different directional alignment strategies. Competitive linking is somewhat in between the intersection approach and the two symmetric approaches, \"best-first\" and \"refined\". This could also be expected as competitive linking only allows non-overlapping one-toone word links. The refined bi-directional alignment approach and the constrained best-first approach are almost identical in our examples with a more or less balanced relation between precision and recall. One advantage of the bestfirst approach is the possibility of incorporating different constraints that suit the current task. The adjacency check is just one of the possible constraints. For example, syntactic criteria could be applied in order to force linked items to be complete according to existing syntactic markup. Non-contiguous elements could also be identified using the same approach simply by removing the adjacency constraint. 
However, this seems to increase the noise significantly according to experiments not shown in this paper. Further investigations into optimizing alignment constraints for particular tasks remain for future work. Focusing on MWUs, the numbers in table 1 give a clear picture of the difficulty all approaches have in finding correct MWU links. Symmetric alignment strategies like refined and best-first in general produce the best results for MWU links. However, the main portion of such links is only partially correct even for these approaches. Using our partiality measure, the intersection of directional alignments still produces the highest precision values when considering MWU links only, even though no MWUs are included in these alignments at all. The best results among MWU links are achieved for the ones including MWUs in both languages. However, these results are still significantly lower than for single-word links (non-MWU).",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(S\u00e5gvall Hein, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 555,
"end": 576,
"text": "(Merkel et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 1055,
"end": 1074,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 1395,
"end": 1403,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "According to our results different alignment strategies can be chosen to suit particular needs. Concluding from the experiments, restrictive methods like the intersection of directional alignments or competitive linking should be chosen if results with high precision are required (which are mostly found among oneto-one word links). This is, for example, the case in automatic extraction of bilingual lexicons where noise should be avoided as much as possible. A strong disadvantage of these approaches is that they do not include MWUs at all. Other strategies should be chosen for applications, which require a comprehensive coverage as, for example, machine translation. Symmetric approaches such as the refined combination of directional alignments and the constrained best-first alignment strategy yield the highest overall performance. They produce the best balance between precision and recall and the highest scores in terms of F-values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "A similar study on statistical alignment models is included in(Och and Ney, 2003).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Qx \u2261 0 for incorrect links for both, precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple hybrid aligner for generating lexical correspondences in parallel texts",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
},
{
"first": "Mikael",
"middle": [],
"last": "Andersson",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "29--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Ahrenberg, Magnus Merkel, and Mikael Andersson. 1998. A simple hybrid aligner for generating lexical correspondences in parallel texts. In Christian Boitet and Pete White- lock, editors, Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Con- ference on Computational Linguistics, pages 29-35, Montreal, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluation of word alignment systems",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"S\u00e5gvall"
],
"last": "Hein",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2nd International Conference on Language Resources and Evaluation",
"volume": "III",
"issue": "",
"pages": "1255--1261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Ahrenberg, Magnus Merkel, Anna S\u00e5gvall Hein, and J\u00f6rg Tiedemann. 2000. Evaluation of word alignment systems. In Proceedings of the 2nd International Conference on Lan- guage Resources and Evaluation, volume III, pages 1255-1261, Athens, Greece.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From Jerusalem and back: a personal account",
"authors": [
{
"first": "Saul",
"middle": [],
"last": "Bellow",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saul Bellow. 1976. From Jerusalem and back: a personal account. The Viking Press, New York, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Jerusalem tur och retur. Bonniers, Stockholm. Translation of Caj Lundgren",
"authors": [
{
"first": "Saul",
"middle": [],
"last": "Bellow",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saul Bellow. 1977. Jerusalem tur och retur. Bonniers, Stockholm. Translation of Caj Lundgren.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving alignment quality in statistical machine translation using context-dependent maximum entropy models",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "1051--1054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vin- cent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistcal machine translation: Parameter estimation. Compu- tational Linguistics, 19(2):263-311, June. Ismael Garc\u00eda-Varea, Franz Josef Och, Her- mann Ney, and Francisco Casacuberta. 2002. Improving alignment quality in statistical machine translation using context-dependent maximum entropy models. In Proceedings of the 19th International Conference on Compu- tational Linguistics, pages 1051-1054, Taipei, Taiwan, August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic evaluation and uniform filter cascades for inducing nbest translation lexicons",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "184--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed. 1995. Automatic evaluation and uniform filter cascades for inducing n- best translation lexicons. In David Yarovsky and Kenneth Church, editors, Proceedings of the 3rd Workshop on Very Large Corpora, pages 184-198, Boston, MA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic construction of clean broad-coverage lexicons",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 2nd Conference the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "125--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 1996. Automatic construction of clean broad-coverage lexicons. In Proceed- ings of the 2nd Conference the Association for Machine Translation in the Americas, pages 125-134, Montreal, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Annotation style guide for the Blinker project",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed. 1998. Annotation style guide for the Blinker project, version 1.0. IRCS Technical Report 98-06, University of Penn- sylvania, Philadelphia, PA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed. 2000. Models of transla- tional equivalence among words. Computa- tional Linguistics, 26(2):221-249, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The PLUG link annotator -interactive construction of data from parallel corpora",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
},
{
"first": "Mikael",
"middle": [],
"last": "Andersson",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
}
],
"year": 1999,
"venue": "Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Merkel, Mikael Andersson, and Lars Ahrenberg. 2002. The PLUG link annotator -interactive construction of data from par- allel corpora. In Lars Borin, editor, Parallel Corpora, Parallel Worlds. Rodopi, Amster- dam, New York. Proceedings of the Sympo- sium on Parallel Corpora, Department of Lin- guistics, Uppsala University, Sweden,1999. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1-10, Edmonton, Canada, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A comparison of alignment models for statistical machine translation",
"authors": [
{
"first": "Franz-Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1086--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz-Josef Och and Hermann Ney. 2000a. A comparison of alignment models for sta- tistical machine translation. In Proceed- ings of the 18th International Conference on Computational Linguistics, pages 1086-1090, Saarbr\u00fccken, Germany, July.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000b. Im- proved statistical alignment models. In Proc. of the 38th Annual Meeting of the Associ- ation for Computational Linguistics, pages 440-447.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguis- tics, 29(1):19-51.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Aims and achievements",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "S\u00e5gvall Hein",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Symposium on Parallel Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna S\u00e5gvall Hein. 2002. The PLUG project: Parallel corpora in Link\u00f6ping, Uppsala, and G\u00f6teborg: Aims and achievements. In Lars Borin, editor, Parallel Corpora, Parallel Worlds. Rodopi, Amsterdam, New York. Proceedings of the Symposium on Parallel Corpora, Department of Linguistics, Uppsala University, Sweden,1999.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translating collocations for bilingual lexicons: A statistical approach",
"authors": [
{
"first": "A",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Smadja",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank A. Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statis- tical approach. Computational Linguistics, 22(1), pages 1-38.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word alignmentstep by step",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 12th Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "216--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 1999. Word alignment - step by step. In Proceedings of the 12th Nordic Conference on Computational Lin- guistics, pages 216-227, University of Trond- heim, Norway.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Combining clues for word alignment",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2003. Combining clues for word alignment. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 339-346, Budapest, Hungary, April.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lexical token alignment: Experiments, results and applications",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Tufis",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Barbu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings from The 3rd International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "458--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Tufis and Ana-Maria Barbu. 2002. Lexical token alignment: Experiments, results and applications. In Proceedings from The 3rd International Conference on Language Re- sources and Evaluation, pages 458-465, Las Palmas, Spain.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Confernece on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of the 16th International Confernece on Compu- tational Linguistics, pages 836-841, Copen- hagen, Denmark.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Different alignment search strategies. Clue alignment settings: dice+pp, giza, and"
},
"TABREF6": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Evaluations of different link types for the setting giza+pp.",
"html": null
}
}
}
}