|
{ |
|
"paper_id": "E03-1026", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:25:08.988572Z" |
|
}, |
|
"title": "Combining Clues for Word Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Rirg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Uppsala University", |
|
"location": { |
|
"postBox": "Box 527", |
|
"postCode": "SE-751 20", |
|
"settlement": "Uppsala", |
|
"country": "Sweden" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, a word alignment approach is presented which is based on a combination of clues. Word alignment clues indicate associations between words and phrases. They can be based on features such as frequency, part-of-speech, phrase type, and the actual wordform strings. Clues can be found by calculating similarity measures or learned from word aligned data. The clue alignment approach, which is proposed in this paper, makes it possible to combine association clues taking different kinds of linguistic information into account. It allows a dynamic tokenization into token units of varying size. The approach has been applied to an English/Swedish parallel text with promising results.", |
|
"pdf_parse": { |
|
"paper_id": "E03-1026", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, a word alignment approach is presented which is based on a combination of clues. Word alignment clues indicate associations between words and phrases. They can be based on features such as frequency, part-of-speech, phrase type, and the actual wordform strings. Clues can be found by calculating similarity measures or learned from word aligned data. The clue alignment approach, which is proposed in this paper, makes it possible to combine association clues taking different kinds of linguistic information into account. It allows a dynamic tokenization into token units of varying size. The approach has been applied to an English/Swedish parallel text with promising results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Parallel corpora carry a huge amount of bilingual lexical information. Word alignment approaches focus on the automatic identification of translation relations in translated texts. Alignments are usually represented as a set of links between words and phrases of source and target language segments. An alignment can be complete, i.e. all items in both segments have been linked to corresponding items in the other language, or incomplete, otherwise. Alignments may include \"null links\" which can be modeled as links to an \"empty element\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In word alignment, we have to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 find an appropriate model M for the alignment of source and target language texts (modeling)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 estimate parameters of the model M, e.g. from empirical data (parameter estimation)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 find the optimal alignment of words and phrases for a given translation according to the model M and its parameters (alignment recovery).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Modeling the relations between lexical units of translated texts is not a trivial task due to the diversity of natural languages. There are generally two approaches, the estimation approach which is used in, e.g., statistical machine translation, and the association approach which is used in, e.g., automatic extraction of bilingual terminology. In the estimation approach, alignment parameters are modeled as hidden parameters in a statistical translation model . Association approaches base the alignment on similarity measures and association tests such as Dice scores (Smadj a et al., 1996; Tiedemann, 1999) , t-scores (Ahrenberg et al., 1998) log-likelihood measures (Tufis and Barbu, 2002) , and longest common subsequence ratios (Melamed, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 573, |
|
"end": 595, |
|
"text": "(Smadj a et al., 1996;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 612, |
|
"text": "Tiedemann, 1999)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 648, |
|
"text": "(Ahrenberg et al., 1998)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 696, |
|
"text": "(Tufis and Barbu, 2002)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 752, |
|
"text": "(Melamed, 1995)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the main difficulties in all alignment strategies is the identification of appropriate units in the source and the target language to be aligned. This task is hard even for human experts as can be seen in the detailed guidelines which are required for manual alignments (Merkel, 1999; Melamed, 1998) . Many translation relations involve multiword units such as phrasal compounds, idiomatic expressions, and complex terms. Syntactic shifts can also require the consideration of a context larger than a single word. Some items are not translated at all. Splitting source and target language texts into appropriate units for alignment (henceforth: tokenization) is often not possible without considering the translation relations. In other words, initial tokenization borders may change when the translation relations are investigated. Human aligners frequently expand token units when aligning sentences manually depending on the context (Ahrenberg et al., 2002) . Previous approaches use either iterative procedures to re-estimate alignment parameters (Smadja et al., 1996; Melamed, 1997; Vogel et al., 2000) or preprocessing steps for the identification of token Ngrams (Ahrenberg et al., 1998; Tiedemann, 1999) . In our approach, we combine simple techniques for prior tokenization with dynamic techniques during the alignment phase.", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 291, |
|
"text": "(Merkel, 1999;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 306, |
|
"text": "Melamed, 1998)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 967, |
|
"text": "(Ahrenberg et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1058, |
|
"end": 1079, |
|
"text": "(Smadja et al., 1996;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1080, |
|
"end": 1094, |
|
"text": "Melamed, 1997;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1095, |
|
"end": 1114, |
|
"text": "Vogel et al., 2000)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1177, |
|
"end": 1201, |
|
"text": "(Ahrenberg et al., 1998;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1202, |
|
"end": 1218, |
|
"text": "Tiedemann, 1999)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The second problem of traditional word alignment approaches is the fact that parameter estimations are usually based on plain text items only. Linguistic data, which could be used to identify associations between lexical items are often ignored. Linguistic tools such as part-of-speech taggers, (shallow) parsers, named-entity recognizers become more and more robust and available for more languages. Linguistic information including contextual features could be used to improve alignment strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The third problem, alignment recovery, is a search problem. Using the alignment model and its parameters, we have to find the optimal alignment for a given pair of source and target language segments. In (Hiemstra, 1998) , the author points out that a sentence pair with a maximum of n token units in both sentences has n! possible alignments in a simple directed alignment model with a fixed tokenization. Furthermore, a search strategy becomes very complex if we allow dyn am i c tokeni zati on borders (overlapping N-gram s, inclusions), which leads us not only to a larger number of possible combinations but also to the problem of comparing alignments with variable length (number of links)", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 220, |
|
"text": "(Hiemstra, 1998)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The clue alignment approach, which we propose here, addresses the three problems which were mentioned above. The approach allows the combination of association measures for any features of translation units of varying size. Overlapping units are allowed as well as inclusions. Association scores are organized in a clue matrix and we present a simple approach for approximating the optimal alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Section 2 describes the clue alignment model and ways of estimating parameters from association scores. Section 3 introduces the alignment approach which is based on word alignment clues. Section 4 gives examples of learning clues from previous alignments. Section 5 summarizes alignment experiments and, finally, section 6 contains conclusions and a discussion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The following English/Swedish sentence pair has been taken from the PLUG corpus (Shgvall Hein, 1999):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The corridors are jumping with them. Korridorerna myllrar av dem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The task for an aligner is now to find all the links between the lexical items in English and the lexical items in Swedish. The natural way of doing this for a human is to use various kinds of information, clues. Even without knowing either of the two languages, a human aligner would find a strong similarity between corridors and korridorema which leads to the conclusion of a possible relation between these two words. Similarly, a relation could be seen between them and dem. 1 In a second step, the aligner might use frequency counts of words in both languages and cooccurrence frequencies for some interesting word pairs. The frequency table above gives the aligner an additional clue for an association between corridors and korridorema and also some ideas about the relation between them and dem but not much about the remaining words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 481, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Finally, the aligner might apply off-the-shelf part-of-speech taggers and shallow syntactic analyzers. The aligner might look up the descriptions of the English tag set and finds out that NNS is the label for a plural noun, VBP and VBG are labels of verbs in the present tense, IN labels a preposition, and PRP a personal pronoun. Similarly, (s)he looks for the Swedish tags and finds out that NCUPN@DS describes a definite noun in plural form and nominative case, V@IPAS describes an active verb in the present tense, SPS labels a preposition, and PF@OPO@S describes a definite pronoun object in plural form. This gives the aligner additional clues about possible links (S)he might expect relations between active verbs in the present tense rather than between verbs and nouns. Finding out that Swedish nouns can bear the feature of definiteness gives the aligner another clue about the translation of the definite article in the English sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Finally, the aligner looks at the output of the shallow parser and gets additional clues for aligning the two sentences. For example, the two English verbs build a verb phrase (VP) which is most likely to be linked to the only \"verb cluster\" (VC) in the Swedish sentence. The personal pronoun in the English sentence is used as a noun phrase (NP) similar to the pronoun in the Swedish sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Putting all the clues together, the aligner comes up with the following alignment without actually having to know the two languages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "the corridors korridorerna are jumping myllrar with av them dem However, looking at the sentence pair again, a second aligner with knowledge of both languages might realize that the verbs myllrar (English: swarm) and jumping do not really correspond to each other in isolation and that the expressions are rather idiomatic in both languages. Therefore, the second aligner might decide to link the whole expression \"are jumping with\" to the Swedish translation of \"myllrar av\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "This kind of disagreement between human aligners is quite normal and demonstrates quite well the problems which have to be handled by automatic alignment approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Now, we would like to use a similar strategy as described in the previous section for an automatic alignment process. In our approach, we use the following definitions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Word alignment clue: A word alignment clue C,(s, t) is a probability which indicates an association between two lexical items s and t in parallel texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Lexical item: A lexical item is a set of words with associated features attached to it (word position may be a feature).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A clue is called static if its value is constant for a given pair of features of lexical items, otherwise it is called dynamic Furthermore, clues can be declarative, i.e. pre-defined feature correspondences, or estimated, i.e. from association scores or from training data. Generally, a clue is defined as a weighted association A between s and t: C,(s, t) = P(a,) = w,A,(s, t) The value of w, is used to normalize and weight the association score A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
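
{

"text": "To make this definition concrete, here is a small illustrative Python sketch (our own addition, not code from the paper) that instantiates C_i(s, t) = w_i * A_i(s, t) with the two association measures used later in the experiments, the Dice coefficient and the longest common subsequence ratio (LCSR); the uniform weight of 0.5 follows the setup in section 5.1, while the function names and example data are assumptions:\n\ndef lcs_len(s, t):\n    # dynamic-programming longest common subsequence length\n    prev = [0] * (len(t) + 1)\n    for a in s:\n        cur = [0]\n        for j, b in enumerate(t, 1):\n            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))\n        prev = cur\n    return prev[-1]\n\ndef lcsr(s, t):\n    # longest common subsequence ratio: |LCS(s, t)| / max(|s|, |t|)\n    return lcs_len(s, t) / max(len(s), len(t)) if s and t else 0.0\n\ndef dice(cooc, freq_s, freq_t):\n    # Dice coefficient from co-occurrence and marginal frequencies\n    return 2.0 * cooc / (freq_s + freq_t) if freq_s + freq_t else 0.0\n\n# weighted clues C_i(s, t) = w_i * A_i(s, t); 0.5 is the uniform weight from section 5.1\ndef dice_clue(cooc, freq_s, freq_t, w=0.5):\n    return w * dice(cooc, freq_s, freq_t)\n\ndef lcsr_clue(s, t, w=0.5):\n    return w * lcsr(s, t)\n\nprint(lcsr('corridors', 'korridorerna'))  # clearly above the 0.4 LCSR threshold used later",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Definitions",

"sec_num": "2.2"

},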
|
{ |
|
"text": "Alignment clues can be estimated from association measures given empirical data. Examples of such measures are given below: Other clues can be estimated from word aligned training data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Ci (8,t) = wi * p(ft lfs ) w . freq( ) 3 freq(fs )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "f, and ft are sets of features of s and t, respectively. They may include features such as part-ofspeech, phrase categories, word positions, and/or any other kind of contextual features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Clues can also be pre-defined. For example, machine-readable dictionary can be used as a collection of declarative clues. Each translation from the dictionary is an alignment clue for the corresponding word pairs. The likelihood of each translation alternative can be weighted, e.g., by frequency (if available).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "So far, word alignment clues are simply sets of weighted association scores. The key task is to combine available clues in order to find interlingual links. Clues are defined as probabilities of associations. In order to combine all indications which are given by single clues C, (s, t) = P(ai ) we define the overall clue Cat/ (s, t) for a given pair of lexical items as the disjunction of all indications:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Combinations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Cau(s7 t) = P(aall) = P(a i U a2 U U a,\") Note that clues are not mutually exclusive. For example, an association based on co-occurrence measures can be found together with an association based on string similarity measures. Using the addition rule for probabilities we get the following formula for a disjunction of two clues: P(ai U a2) = P(ai) P(a2) -P(al n a2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Combinations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For simplicity, we assume that clues are independent of each other. P(ai n a2 ) = P(ai)P(a2) This is a crucial assumption and has to be considered when designing clue patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Combinations", |
|
"sec_num": "2.3" |
|
}, |
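
{

"text": "As a minimal illustration (our own sketch, not from the paper), the addition rule applied pairwise under the independence assumption is equivalent to C_all(s, t) = 1 - (1 - P(a_1)) * ... * (1 - P(a_n)); the function name is an assumption:\n\ndef combine_clues(clue_values):\n    # disjunction of independent clues:\n    # P(a_1 u ... u a_n) = 1 - prod_i (1 - P(a_i))\n    total = 0.0\n    for p in clue_values:\n        total = total + p - total * p  # addition rule applied pairwise\n    return total\n\n# two clue values 0.45 and 0.83 combine to 0.9065,\n# matching the worked example in section 2.5\nprint(combine_clues([0.45, 0.83]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clue Combinations",

"sec_num": "2.3"

},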
|
{ |
|
"text": "Word alignment clues may refer to any set of words from the source and target language segment according to the definitions in section 2.2. Therefore, clues can refer to sets of words which overlap with other sets of words to which another clue refers. Such overlaps and inclusions make it impossible to combine the corresponding clues directly with the formulas which were given in the previous section. In order to enable clue combi-nations even for overlapping units, we define the following property of word alignment clues:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlaps, Inclusions and the Clue Matrix", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "A clue indicates an association between all its member token pairs. This property makes it possible to combine alignment clues by distributing the clue indication from complex structures to single word pairs. In this way, dynamic tokenization can be used for both, source and target language sentences and combined association scores (the total clue value) can be calculated for each pair of single tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlaps, Inclusions and the Clue Matrix", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Now, sentence pairs can be represented in a two-dimensional matrix with one source language word per row and one target language word per column. The cells inside the matrix can be filled with the combined clue values for the corresponding word pairs. Henceforth, this matrix will be referred to as a clue matrix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlaps, Inclusions and the Clue Matrix", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Consider the following English/Swedish sentence pair:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "Then hand baggage is opened. Sedan Oppnas handbagaget.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "Assume that the alignment program found the following alignment clues which are based on string similarity and co-occurrence statistics: 2 The alignment clues contain only three multiword units. However, even these few units cause several overlaps. For example, the English string \"hand baggage\" from the set of string similarity clues overlaps with the string \"baggage\". The clue for the pair \"is opened\" and \"sedan Oppnas\" overlaps with six other clues. However, using our 2 Note that clues do not have to be correct! Alignment clues give hints for a possible relation between words and phrases. They can even be misleading, but hopefully, the indication of combined clues will lead to correct links. The matrix is simply filled with all values of combined clues for each word pair. For example, the total clue value for the word pair s =\"baggage\" and t =\"handbagaget\" is calculated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "Cau (s, t) = 0.45+0.83 -0.45*0.83 = 0.9065 All other values are computed in the same way. Looking at the matrix, we can find clear relations between certain words such as [hand,baggage] and handbagaget. However, between other word pairs such as is and sedan we find only low associations which conflict with others and therefore, they can be dismissed in the alignment process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 185, |
|
"text": "[hand,baggage]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example", |
|
"sec_num": "2.5" |
|
}, |
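
{

"text": "The following Python sketch (our illustration; the scores 0.45 and 0.83 and the combined value 0.9065 are taken from the example above, but which clue type produced which score is our assumption) shows how clue values of possibly overlapping multi-word units are distributed to their member token pairs and collected in a clue matrix:\n\nfrom collections import defaultdict\n\ndef combine(p, q):\n    # disjunction of two independent clues\n    return p + q - p * q\n\ndef clue_matrix(clues):\n    # 'clues' maps (source token index span, target token index span) -> clue value;\n    # spans are tuples of indices, so multi-word units may overlap\n    matrix = defaultdict(float)\n    for (s_span, t_span), value in clues.items():\n        # a clue indicates an association between all its member token pairs\n        for i in s_span:\n            for j in t_span:\n                matrix[(i, j)] = combine(matrix[(i, j)], value)\n    return matrix\n\nsrc = ['Then', 'hand', 'baggage', 'is', 'opened']\ntrg = ['Sedan', 'oppnas', 'handbagaget']\nclues = {\n    ((2,), (2,)): 0.45,    # 'baggage' ~ 'handbagaget'\n    ((1, 2), (2,)): 0.83,  # 'hand baggage' ~ 'handbagaget'\n}\nm = clue_matrix(clues)\nprint(src[2], trg[2], round(m[(2, 2)], 4))  # baggage handbagaget 0.9065",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example",

"sec_num": "2.5"

},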
|
{ |
|
"text": "Word alignment clues as described above can be used to model the relations between words of translated texts. Parameters of this model can be collected in a clue matrix as introduced in section 2.4. The final task is now to recover the actual alignment of words and phrases from the text using the parameters in the clue matrix. This can be formulated as a search task in which one tries to find the optimal alignment using possible links between words and phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is important for our purposes to allow multiple links from each word (source and target) to corresponding words in the other language in order to obtain phrasal links We say that a wordto-word link overlaps with another one if both of them refer to either the same source or the same target language word. Sets of overlapping links form link clusters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Phrasal links cause alignments with varying numbers of linked items which have to be compared. We use the following dynamic procedure in order to approach an optimal alignment:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. Find the best link in the clue matrix, i.e. find the word-to-word relation with the highest value in the matrix to zero.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2. Check for overlaps: If the link overlaps with other links from more than one accepted link cluster continue with 1. If the link overlaps with another accepted link but the nonoverlapping tokens are not next to each other in the text continue with 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3. Add the link to the set of accepted link clusters and continue with 1 until no more links are found (or the best link is below a certain threshold)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
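
{

"text": "A simplified Python sketch of this greedy procedure (our own code, not the paper's implementation; the contiguity test that keeps linked phrases together is deliberately simplified, and the threshold value is an assumption):\n\ndef clue_align(matrix, threshold=0.1):\n    # greedy link extraction from a clue matrix {(i, j): value}\n    links = dict(matrix)\n    clusters = []  # each cluster is a set of accepted (i, j) links\n    while links:\n        (i, j), best = max(links.items(), key=lambda kv: kv[1])\n        if best < threshold:\n            break\n        del links[(i, j)]  # step 1: take the best remaining link and remove it from consideration\n        # step 2: clusters sharing a source or a target token with this link\n        touching = [c for c in clusters if any(i == x or j == y for (x, y) in c)]\n        if len(touching) > 1:\n            continue  # would merge two accepted clusters: skip\n        if touching:\n            c = touching[0]\n            rows = {x for (x, y) in c}\n            cols = {y for (x, y) in c}\n            # simplified contiguity check: the new tokens must be adjacent to the cluster\n            if not ((i in rows or i - 1 in rows or i + 1 in rows) and\n                    (j in cols or j - 1 in cols or j + 1 in cols)):\n                continue\n            c.add((i, j))  # step 3: extend an accepted link cluster\n        else:\n            clusters.append({(i, j)})  # step 3: open a new link cluster\n    return clusters\n\n# toy matrix: source tokens 0 and 1 both link to target token 0, forming one cluster\nprint(clue_align({(0, 0): 0.9, (1, 0): 0.6, (2, 2): 0.4}))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clue Alignment",

"sec_num": "3"

},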
|
{ |
|
"text": "The algorithm is very simple and may miss the optimal alignment. However, it is a very efficient way of extracting links according to their association clues. Experiments, which are presented further down, show promising results. The crucial point of the algorithm is the attachment of links to existing link clusters. The algorithm restricts clusters to pairs of contiguous word sequences in order to reduce the number of malformed phrases in extracted links. A better way would be to use proper language models to do this job. Another possibility is to use the syntactic structures from a (shallow) parser as prior knowledge. A simple modification of the algorithm above would be to accept overlapping links only if they do not cross phrase borders according to the syntactic analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clue Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In section 2.2, we pointed out that clues can be estimated from aligned training material. This allows us to infer new clues from previous links by estimating conditional probabilities. For this, we assume that previous links are correct and can be used for probability estimations. This is not true in general. However, we hope to find additional links with sufficient accuracy from these clues. In other words, we expect clues, which have been found via \"self-learning\" techniques to increase the recall with an acceptable increase of noise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Previous links point to the context from which they originated. Therefore, we can access any pair of features which is available for the context as well as for the linked items themselves. In this way, clue probabilities can be based on any combination of features of linked items and their context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A simple example is to use part-of-speech (PUS) tags as a feature of lexical items. Using this feature, we can estimate the probabilities of source language items with certain PUS-tags to be linked to target language items with certain other PUS-tags.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Consider the following example: An English/Swedish bitext has been aligned with some basic clue patterns using the clue alignment approach. Now, we assume that pairs of PUS labels can give us additional clues about possible links A new clue is, e.g., a conditional probability of a sequence of POS labels of linked items given the PUS labels of the items they were linked to.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
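
{

"text": "A hedged Python sketch of this kind of learned clue (our own illustration; the training links and tag pairs are invented toy data, not corpus counts, and the 0.5 weight follows the normalization used later). It estimates C_i(s, t) = w_i * P(f_t | f_s) with the POS label sequences of previously linked items as features:\n\nfrom collections import Counter\n\ndef learn_pos_clue(linked_pairs, weight=0.5):\n    # linked_pairs: list of (source POS sequence, target POS sequence)\n    # taken from previously extracted links (assumed to be correct)\n    pair_freq = Counter(linked_pairs)\n    src_freq = Counter(src for src, _ in linked_pairs)\n    # C_i(s, t) = w_i * P(f_t | f_s), estimated by relative frequencies\n    return {(src, trg): weight * n / src_freq[src]\n            for (src, trg), n in pair_freq.items()}\n\n# toy training links; tag pairs are illustrative only\nlinks = [(('NNS',), ('NCUPN@DS',)),\n         (('NNS',), ('NCUPN@DS',)),\n         (('NNS',), ('NCUPN@IS',)),\n         (('VBP', 'VBG'), ('V@IPAS',))]\nclues = learn_pos_clue(links)\nprint(clues[(('NNS',), ('NCUPN@DS',))])  # 0.5 * 2/3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Bootstrapping Clue Alignment",

"sec_num": "4"

},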
|
{ |
|
"text": "Applying learned clues (such as the PUS clue from above) on their own would probably be misleading in many cases. However, they add valuable information in combination with others. In our experiments, we applied the following feature patterns for learning clues:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "POS full: Label sequences as described above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "POS coarse: Label sequences as in POS full but with a reduced tag set for Swedish (done by simply cutting the label after two characters)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Phrase: Phrase type labels which have been produced by (shallow) parsers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bootstrapping Clue Alignment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The position of the translations in the target language segment relative to the position of the original in the source language segment. 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Position:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following matrix was produced for the sentence pair from section 2.1 using all the learned alignment clues from above. 'The position clue is more of a weight than a clue. It favors common position distances of links by giving them a higher value. This somehow assumes that translations are more likely to be found close to each other than far away in terms of word position. 4 Each clue has been normalized with a uniform weight of 0.5 except for the position clue which was weighted with a value of 0.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 380, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Position:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The numbers in bold refer to the link clusters which would have been extracted using the clue alignment procedure from section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Position:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We applied the \"clue aligner\" to one of our parallel corpora from the PLUG project (S\u00e4gvall Hein, 1999) , a novel by Saul Bellow \"To Jerusalem and back: a personal account\" with about 170,000 words in English and Swedish.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 103, |
|
"text": "(S\u00e4gvall Hein, 1999)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The English portion of the corpus has been tagged automatically with PUS tags by the English maximum entropy tagger in the open-source software package Grok (Baldridge, 2002) . The same package was used for shallow parsing of the English sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 174, |
|
"text": "(Baldridge, 2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The Swedish portion was tagged by the Ngrambased TnT-tagger (Brants, 2000) which was trained for Swedish on the SUC corpus (Megyesi, 2001 ). Furthermore, we used a rule-based analyzer for syntactic parsing (Megyesi, 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 74, |
|
"text": "(Brants, 2000)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 137, |
|
"text": "(Megyesi, 2001", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 221, |
|
"text": "(Megyesi, 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Our basic alignment applies two association clues: the Dice coefficient and the longest common subsequence ratio (LCSR). Both clues have been weighted uniformly with a value of 0.5. The threshold for the Dice coefficient has been set to 0.3 and the minimal co-occurrence frequency to 2. The threshold of LCSR scores has been set to 0.4 and the minimal token length to 3 characters. 5 We certainly wanted to test the ability of finding phrasal links Therefore, both association clues have been calculated for pairs of multi-word units (MWUs). MWUs may overlap with others or may be included in other MWUs. We used two different approaches in order to select appropriate MWUs: N-grams: word bigrams and trigrams + simple language filters (stop word lists) to find common phrase borders.", |
|
"cite_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 383, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Chunks phrases which have been marked by a shallow parser for English and a rule-based parser for Swedish.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Both sets of MWUs can also be combined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Furthermore, we were interested in the ability of the algorithm of learning new clues from previously aligned links as discussed in 4. We applied all the clue patterns which where introduced at the end of section 4: POS full, POS coarse, Phrase, Position. POS clues have been normalized with a weight of 0.5. Relative position and phrase type labels bear much less information about specific words and phrases than POS tags, therefore, a lower weight of 0.1 was chosen for these two clues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the evaluation, we used an existing gold standard which was produced within the PLUG project (Ahrenberg et al., 1999) . The gold standard consists of 500 randomly sampled items which have been aligned manually according to detailed guidelines (Merkel, 1999) . Results are measured using fine-grained metrics for precision and recall (Ahrenberg et al., 2000) and a balanced Fmeasure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 121, |
|
"text": "(Ahrenberg et al., 1999)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 261, |
|
"text": "(Merkel, 1999)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 361, |
|
"text": "(Ahrenberg et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The following The values in the table above show a clear improvement, mainly in recall, of the results when additional clues have been applied. The impact of POS clues follows intuitive expectations. A smaller tagset makes the clues more general and the recall value is increased while the precision drops significantly. However, the position clue changed the result beyond all expectations. Not only did recall go up significantly, even the precision was increased. This proves that relations between word positions are important in aligning Swedish and English. Position distance weights (implicitly implemented as a position clue) seem to improve the choice between competing link alternatives. The effect of similar clues on less re-lated languages is necessary in order to evaluate their general quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The impact of the MWU approach was investigated as well. The following table compares results of the two different approaches as well as the combined approach. The \"chunk\" approach is best in all categories. The lower values for precision for the other two approaches could be expected. However, the low recall value for the combined approach is a surprise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this paper, a word alignment approach has been introduced which is based on a combination of association clues. The algorithm supports a dynamic tokenization of parallel texts which enables the alignment system to combine relations of overlapping pairs of lexical items (words and phrases).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Alignment clues can be estimated from common association criteria such as co-occurrence statistics and string similarity measures. Clues can also be learned from pre-aligned training data. It has been demonstrated how self-learning techniques can be used for learning additional alignment clues from previous alignments. Clues are defined as probabilities which indicate an association between lexical items according to some of their features. This definition is very flexible as features can represent any kind of information that is available for each item and its context. An important advantage of the clue alignment algorithm is the possibility of combining association scores. In this way, any number of clues can be included in the alignment process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The alignment experiments, which were presented in this paper, demonstrate the combination of two basic clue types (based on co-occurrence and string similarity) with four additional clue types (based on PUS labels, chunk labels, and relative word positions) which were learned during the alignment process. The alignment experiments on a parallel English/Swedish corpus showed significant improvements of the results when additional clues were added to the basic settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The clue alignment approach is very flexible and can easily be adapted to a new domain, language pair, and a different set of clues. Clue patterns can be defined depending on the information which is available (POS tags, phrase types, semantic tags, named entity markup, dictionaries etc.). However, clue patterns have to be designed carefully as they can be misleading. Word alignment is a real-life problem: We are looking for links in the complex world of parallel corpora and we need good clues in order to find them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "0f course, the aligner should be aware of the possibilities of false friends especially among short words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Weights and threshold have been chosen intuitively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The values refer to experiments with the \"chunk\" approach for MWU selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A simple hybrid aligner for generating lexical correspondences in parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Merkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikael", |
|
"middle": [ |
|
"Andersson" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Ahrenberg, Magnus Merkel, and Mikael Anders- son. 1998. A simple hybrid aligner for generating lexical correspondences in parallel texts. In Pro- ceedings of the 36th Annual Meeting of the Associa- tion for Computational Linguistics and the 17th In- ternational Conference on Computational Linguis- tics, pages 29-35, Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Evaluation of LWA and UWA", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Merkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"Sagvall" |
|
], |
|
"last": "Hein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Ahrenberg, Magnus Merkel, Anna Sagvall Hein, and JOrg Tiedemann. 1999. Evaluation of LWA and UWA. Technical Report 15, Department of Linguis- tics, University of Uppsala.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluation of word alignment systems", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Merkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"Sagvall" |
|
], |
|
"last": "Hein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 2nd International Conference on Language Resources and Evaluation, LREC-2000. European Language Resources Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Ahrenberg, Magnus Merkel, Anna Sagvall Hein, and JOrg Tiedemann. 2000. Evaluation of word alignment systems. In Proceedings of the 2nd In- ternational Conference on Language Resources and Evaluation, LREC-2000. European Language Re- sources Association.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A system for incremental and interactive word linking", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Merkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikael", |
|
"middle": [], |
|
"last": "Andersson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings from The Third International Conference on Language Resources and Evaluation (LREC-2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "485--490", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Ahrenberg, Magnus Merkel, and Mikael Anders- son. 2002. A system for incremental and interactive word linking. In Proceedings from The Third In- ternational Conference on Language Resources and Evaluation (LREC-2002), pages 485-490, Las Pal- mas, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "GROK -an open source natural language processing library", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Baldridge. 2002. GROK -an open source natural language processing library. http://grok.sourceforge.net/.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "TnT -a statistical part-ofspeech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Sixth Applied Natural Language Processing Conference ANLP-2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Brants. 2000. TnT -a statistical part-of- speech tagger. In Proceedings of the Sixth Applied Natural Language Processing Conference ANLP- 2000, Seattle, Washington.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multilingual domain modeling in Twenty-One: Automatic creation of a bidirectional translation lexicon from a parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Djoerd", |
|
"middle": [], |
|
"last": "Hiemstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the eighth CLIN meeting", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Djoerd Hiemstra. 1998. Multilingual domain mod- eling in Twenty-One: Automatic creation of a bi- directional translation lexicon from a parallel cor- pus. In Peter-Arno Coppen, Hans van Halteren, and Lisanne Teunissen, editors, Proceedings of the eighth CLIN meeting, pages 41-58.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Comparing data-driven learning algorithms for POS tagging of swedish", |
|
"authors": [ |
|
{ |
|
"first": "Beata", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beata Megyesi. 2001. Comparing data-driven learning algorithms for POS tagging of swedish. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing (EMNLP 2001), pages 151-158, Carnegie Mellon University, Pittsburgh, PA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Shallow parsing with POS taggers and linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Beata", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of Machine Learning Research: Special Issue on Shallow Parsing", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "639--668", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beata Megyesi. 2002. Shallow parsing with POS taggers and linguistic features. Journal of Machine Learning Research: Special Issue on Shallow Pars- ing, (2):639-668.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 3rd Workshop on Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 1995. Automatic evaluation and uni- form filter cascades for inducing N-best translation lexicons. In Proceedings of the 3rd Workshop on Very Large Corpora, Boston/Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Automatic discovery of noncompositional compounds in parallel data", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 1997. Automatic discovery of non- compositional compounds in parallel data. In Pro- ceedings of the 2nd Conference on Empirical Meth- ods in Natural Language Processing, Providence.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Annotation style guide for the Blinker project", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 1998. Annotation style guide for the Blinker project, version 1.0. IRCS Technical Report 98-06, University of Pennsylvania, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Annotation style guide for the PLUG link annotator", |
|
"authors": [ |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Merkel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magnus Merkel. 1999. Annotation style guide for the PLUG link annotator. Technical report, Linkoping University, Linkoping.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Improved statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Compu- tational Linguistics, pages 440-447.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The PLUG Project: Parallel corpora in Linkoping, Uppsala, and Goteborg: Aims and achievements", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Sagvall Hein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna Sagvall Hein. 1999. The PLUG Project: Parallel corpora in Linkoping, Uppsala, and Goteborg: Aims and achievements. Technical Report 16, Department of Linguistics, University of Uppsala.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Translating collocations for bilingual lexicons: A statistical approach", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Smadja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasileios", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "1--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Compu- tational Linguistics, 22(1), pages 1-38.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Word alignment -step by step", |
|
"authors": [ |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 12th Nordic Conference on Computational Linguistics NODALIDA99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "JOrg Tiedemann. 1999. Word alignment -step by step. In Proceedings of the 12th Nordic Conference on Computational Linguistics NODALIDA99, pages 216-227, University of Trondheim, Norway.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Lexical token alignment: Experiments, results and applications", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Tufis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana-Maria", |
|
"middle": [], |
|
"last": "Barbu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings from The Third International Conference on Language Resources and Evaluation (LREC-2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "458--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Tufis and Ana-Maria Barbu. 2002. Lexical to- ken alignment: Experiments, results and applica- tions. In Proceedings from The Third International Conference on Language Resources and Evaluation (LREC-2002), pages 458-465, Las Palmas, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Verbmobil: Foundations of Speech-to-Speech Translation, chapter Statistical Methods for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sonja", |
|
"middle": [], |
|
"last": "Niel3en", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sawaf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Vogel, Franz Josef Och, Christoph Tillmann, Sonja Niel3en, Hassan Sawaf, and Hermann Ney, 2000. Verbmobil: Foundations of Speech-to-Speech Translation, chapter Statistical Methods for Ma- chine Translation. Springer Verlag, Berlin.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": ": ADice(s, t) = p(s)+P(t) (the Dice coefficient)String similarity: ALCSR(S,t) = LCSR(S,t) (the longest common subsequence ratio)", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td>the results of</td></tr><tr><td colspan=\"4\">some alignment experiments using different sets</td></tr><tr><td colspan=\"2\">of clues (all values in %): 6</td><td/><td/></tr><tr><td>adding clues</td><td>precision</td><td>recall</td><td>F</td></tr><tr><td>basic</td><td>79.737</td><td>41.695</td><td>54.757</td></tr><tr><td>+ POS full</td><td>74.311</td><td>49.554</td><td>59.458</td></tr><tr><td>+ POS coarse</td><td>69.854</td><td>57.279</td><td>62.945</td></tr><tr><td>+ phrase</td><td>78.289</td><td>45.122</td><td>57.249</td></tr><tr><td>+ position</td><td>81.939</td><td>44.703</td><td>57.847</td></tr><tr><td>+ all</td><td>74.749</td><td colspan=\"2\">63.730 68.801</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |