{
"paper_id": "C98-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:29:56.623769Z"
},
"title": "A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Texts",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link6ping University",
"location": {
"postCode": "S-581 83",
"settlement": "Link6ping",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Mikael",
"middle": [],
"last": "Andersson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link6ping University",
"location": {
"postCode": "S-581 83",
"settlement": "Link6ping",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link6ping University",
"location": {
"postCode": "S-581 83",
"settlement": "Link6ping",
"country": "Sweden"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an algorithm for bilingual word alignment that extends previous work by treating multi-word candidates on a par with single words, and combining some simple assumptions about the translation process to capture alignments for low frequency words. As most other alignment algorithms it uses cooccurrence statistics as a basis, but differs in the assumptions it makes about the translation process. The algorithm has been implemented in a modular system that allows the user to experiment with different combinations and variants of these assumptions. We give performance results from two ewfiuations, which compare well with results reported in the literature.",
"pdf_parse": {
"paper_id": "C98-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an algorithm for bilingual word alignment that extends previous work by treating multi-word candidates on a par with single words, and combining some simple assumptions about the translation process to capture alignments for low frequency words. As most other alignment algorithms it uses cooccurrence statistics as a basis, but differs in the assumptions it makes about the translation process. The algorithm has been implemented in a modular system that allows the user to experiment with different combinations and variants of these assumptions. We give performance results from two ewfiuations, which compare well with results reported in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years much progress have been made in the area of bilingual alignment for the support of tasks such as machine translation, machine-aided translation, bilingual lexicography and terminology. For instance, Melamed (1997a) reports that his word-to-word model for translational equivalence produced lexicon entries with 99% precision and 46% recall when trained on 13 million words of the Hansard corpus, where recall was measured as the fraction of words from the bitext that were assigned some translation. Using the same model but less data, a French/English software manual of 400,000 words, Resnik and Melamed (1997) reported 94% precision with 30% recall.",
"cite_spans": [
{
"start": 215,
"end": 230,
"text": "Melamed (1997a)",
"ref_id": null
},
{
"start": 603,
"end": 628,
"text": "Resnik and Melamed (1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "While these figures are indeed impressive, more telling figures can only be obtained by measuring the effect of the alignment system on some specific task. Dagan and Church (1994) reports that their Termight system helped double the speed at which terminology lists could be compiled at the AT&T Business Translation Services.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "Dagan and Church (1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "It is also clear that the usability of bilingual concordances would be greatly improved if the system could indicate both items of a translation pair and if phrases could be looked up with the same ease anti precision as single words (Macklovitch and Hannah 1996) .",
"cite_spans": [
{
"start": 234,
"end": 263,
"text": "(Macklovitch and Hannah 1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "For the language pairs that are of particular interest to us, English vs. other Germanic languages, the ability to handle multi-word units adequately is crucial (cf. Jones and Alexa 1997) .",
"cite_spans": [
{
"start": 166,
"end": 187,
"text": "Jones and Alexa 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "In English a large number of technical terms are multi-word compounds, while the corresponding terms in other Germanic languages are often single-word compounds. We illustrate with a few examples from an EnglislgSwedish computer manual: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Swedish file mana~ filhantcrare network server niitverksscrver ~~ opcrativsystcm semi2 directou -installationskatalog Also, many common adverbials and prepositions are multi-word units, which may or may not be translated as such. The problem we consider is how to find word and phrase alignments for a bitext that is already aligned at the sentence level. Results should be delivered in a form that could easily be checked and corrected by a human user. Although we primarily use the system for bitexts with an English and a Scandinavian hall the system should preferably be useful for many different language pairs. Thus we don't rely on the existence of POS-taggers or lemmatizers for the languages involved, but wish to provide mechanisms that a user can easily adapt to new languages. The organisation of the paper is as follows: In section 2 we relate this approach to previous work, in section 3 we motivate and spell out our assumptions about the behaviour of lexical units in translation, in section 4 we present the basic features of the algorithm, and in section 5 we present results from an evaluation and try to compare these to the results of others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "Most algorithms for bilingual word alignment to date have been based on the probabilistic translation models first proposed by Brown et al. (1988 Brown et al. ( , 1990 , especially Model 1 and Model 2. These models explicitly exclude multi-word units from consideration 1. Melamed (1997b) , however, proposes a method for the recognition of multiword compounds in bitexts that is based on the predictive value of a translation model. A trial translation model that treat certain multi-word sequences as units is compared with a base translation model that treats the same sequences as multiple single-word units.",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "Brown et al. (1988",
"ref_id": "BIBREF0"
},
{
"start": 146,
"end": 167,
"text": "Brown et al. ( , 1990",
"ref_id": "BIBREF1"
},
{
"start": 273,
"end": 288,
"text": "Melamed (1997b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2."
},
{
"text": "A drawback with Melamed's method is that compounds are defined relative to a given translation and not with respect to languageinternal criteria. Thus, if the method is used to construct a bilingual concordance, there is a risk that compounds and idioms that translate compositionally will not be found. Moreover, it is computationally expensive and, since it constructs compounds incrementally, adding one word at a time, requires many iterations and much processing to find linguistic units of the proper size. Kitamura and Matsumoto (1996) present results from aligning multi-word and single word expressions with a recall of 80 per cent if partially correct translations were included. Their method is iterative and is based on the use of the Dice coefficient. Smadja et. al (1996) also use the Dice i Model 3-5 includes multi-word units in one direction. coefficient as their basis for aligning collocations between English and French. Their evaluation show results of 73 per cent accuracy (precision) on average.",
"cite_spans": [
{
"start": 513,
"end": 542,
"text": "Kitamura and Matsumoto (1996)",
"ref_id": "BIBREF5"
},
{
"start": 765,
"end": 785,
"text": "Smadja et. al (1996)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2."
},
{
"text": "As Fung and Church (1994) we wish to estimate the bilingual lexicon directly. Unlike Fung and Church our texts are already aligned at sentence lcvel and the lexicon is viewed, not merely as word associations, but as associations between lexical units of the two languages.",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "Fung and Church (1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "We assume that texts have structure at many different levels. At the most concrete level a text is simply a sequence of characters. At the next level a text is a sequence of word tokens, where word tokens are defined as sequences of alphanumeric character strings that are separated from one another by a finite set of delimiters such as spaces and punctuation marks. While many characters can be used either as word delimiters or as nondelimiters, we prefer to uphold a consistent difference between delimiters and non-delimiters, for the ease of implementation that it allows. At the same time, however, the tokenizer recognizes common abbreviations with internal punctuation marks and regularizes clitics to words (e.g. can't is regularized to can not).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
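{
"text": "To make the tokenization step concrete, here is a minimal Python sketch, assuming a fixed delimiter set and a small clitic table; the clitic list and all names are illustrative rather than the authors' actual implementation.\n\nimport re\n\n# Hypothetical clitic table; the paper names only the can't -> can not case.\nCLITICS = {\"can't\": [\"can\", \"not\"], \"don't\": [\"do\", \"not\"]}\n\ndef tokenize(sentence):\n    # Keep word-internal punctuation (abbreviations, clitics); strip delimiters.\n    tokens = []\n    for raw in re.findall(r\"\\w+(?:['.-]\\w+)*\", sentence):\n        tokens.extend(CLITICS.get(raw.lower(), [raw]))\n    return tokens\n\nprint(tokenize(\"He can't open the file manager.\"))\n# -> ['He', 'can', 'not', 'open', 'the', 'file', 'manager']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},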
{
"text": "At the next level up a text can be viewed as a partially ordered bag of lexical units. It is a bag because the same unit can occur several times in a single sentence. It is partially ordered because a lexical unit may extend across other lexical units, as in He turned the offer down. Tabs were kept on him.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "We say that words express lexical units, and that units are expressed by words. A unit may be expressed by a multi-word sequence, while a given word can express at most one lexical unit. 2 It is often hard to tell the difference between a lexical unit and a lexical complex. We assume that 2 This latter assumption is actually too strict for Germanic languages where morphological compounding is a productive process, but we make it nevertheless, as we have no means too identify compounds reliably. Moreover, the borderline between a lexicalized compound and a compositional compound is hard to draw consistently, anyway. recurrent collocations that pass certain structural and contextual tests are candidate expressions for lexical units. If such collocations are found to correspond to something in the other half of the bitext on the basis of co-occurrence measures, they are regarded as expressions of lexical units. This will include compound names such as New York', 'ttenry Kissinger' and 'World War 1I', and compound terms such as 'network server directory'. Thus, as with the compositional compounds just discussed, we prefer high recall to high precision in identifying multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "The expressions of a lexical unit form an equivalence class. An equivalence class for a single-word unit includes its morphological variants. An equivalence class for a multi-word unit should include syntactic variants as well. For instance, the lexical unit turn down should include 'turned down' 'turning down' as well as expressions where the particle is separated from the verb by some appropriate phrase, as in the example above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "The current system, though, does not provide for syntactic variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "Our aim is to establish relations not only between corresponding words and word sequences in the bitext, but also between corresponding lexical units. A problem is then that the algorithm cannot recognize lexical units directly, but only their expressions. It helps to include lexical units in the underlying model, however, as they have explanatory value. Moreover, the algorithm can be made to deliver its output in the form of correspondences between equivalence classes of expressions belonging to the same lexical unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "For the purpose of generating the alignment and the dictionary we divide the lexical units into three classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying assumptions",
"sec_num": "3."
},
{
"text": "The same categories apply to expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "Irrelevant units are simply those that we don't want to include. They have to be listed explicitly. The reason for not including some items may vary with the purpose of alignment. Even if we wish the alignment to be as complete as possible, it might be useful to exclude certain units that we suspect may confuse the algorithm. For instance, the dosupport found in English usually has no counterpart in other languages. Thus, the different fonns of 'do' may be excluded from consideration from the start.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "As for the translation relation we make the following assumptions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "1. A lexical unit in one half of the bitext corresponds to at most one lexical unit in the other half. This can be seen as a generalization of the one-to-one assumption for word-to-word translation used by Melamed (1997a) and is exploited for the same purpose, i.e. to exclude large numbers of candidate alignments, when good initial alignments have been found.",
"cite_spans": [
{
"start": 206,
"end": 221,
"text": "Melamed (1997a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "2. Open class and closed class lexical units are usually translated and there are a limited number of lexical units in the other language that are commonly used to translate them. While deliberately vague this assumption is what motivates our search for frequent pairs <source expression, target expression> with high mutual information. It also motivates our choice of regarding additions and deletions of lexical units in translation as haphazard apart from the case of a restricted set of irrelevant units that we assume can be known in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "3. Open class units can only be aligned with open class units, and closed class units can only be aligned with closed class units. This assumption seems generally correct and has the effect of reducing the number of candidate alignments significantly. Closed class units have to be listed explicitly. The assumption is that we know the two languages sufficiently well to be able to come up with an appropriate list of closed class units and expressions. Multi-word closed class units are listed separately. Closed class units can be further classified for the purposes of alignment (see below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "4. If some expression for the lexical unit Us is found corresponding to some expression for the lexical unit Us, then assume that arty expression of Us may correspond to any expression of U-r. This assumption is in accordance with the often made observation that morphological properties are not invariants in translation. It is used to make the algorithm more greedy by accepting infrequent alignments that are morphological vm'iants of high-rating ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "5. If one half of an aligned sentence pair is the expression of a single lexical unit, then assume that the other half is also. This is definitely a heuristic, but it has been shown to be very useful for technical texts involving English and Scandinavian, where terms are often found in lists or table cells (Tiedemann 1997) . This heuristic is useful for finding alignments regardless of frequencies.",
"cite_spans": [
{
"start": 308,
"end": 324,
"text": "(Tiedemann 1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "Similarly, if there is only one non-aligned (relevant open class) word left in a partially aligned sentence, assume that it corresponds to the remaining (relevant open class) words of the corresponding sentence. 6. Position matters, i.e. while word order is not an invariant of translation it is not random either. We implement the contribution of position as a distribution of weights over the candidate pairs of expressions drawn from a given pair of sentences. Expressions that are close in relative position receive higher weights, while expressions that are far from each other receive lower weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "irrelevant units, 2. closed class units, 3. open class units",
"sec_num": "1."
},
{
"text": "A bitext aligned at the sentence level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": "4.1"
},
{
"text": "There ate two types of output data: (i) a table of link types in the form of a bilingual dictionary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output",
"sec_num": "4.2"
},
{
"text": "where each entry has the form <<s,t ~ .... tn>, s being the source expression type and t ~ .... t\" the target expression types that were found to conespond to s; and (ii) a table of link instances <<s,t><i,j>> sorted by sentence pairs, where s is some expression from the source text, t is an expression from the translated text, and i and j are the (withinsentence) positions of the first word of s and t, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output",
"sec_num": "4.2"
},
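{
"text": "For concreteness, the two kinds of output record could be rendered as Python values as follows; the entries are invented examples, not actual system output.\n\n# (i) dictionary entry: a source expression type with the target expression\n# types found to correspond to it\nentry = (\"network server\", [\"nätverksserver\", \"nätverksservern\"])\n\n# (ii) link instance: the expression pair plus the within-sentence positions\n# i, j of the first word of s and t, respectively\ninstance = ((\"network server\", \"nätverksserver\"), (4, 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output",
"sec_num": "4.2"
},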
{
"text": "Both halves of the bitext are regularized. When open class multi-word units are to be included, they are generated in a preprocessing stage for both the source and target texts and assembled in a table. For this purpose, we use the phrase extracting program described in Merkel et al. (1994) .",
"cite_spans": [
{
"start": 271,
"end": 291,
"text": "Merkel et al. (1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.3"
},
{
"text": "The basic algorithm combines the K-vec approach, described by Fung and Church (1993) , with the greedy word-to-word algorithm of Melamed (1997a) .",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "Fung and Church (1993)",
"ref_id": null
},
{
"start": 129,
"end": 144,
"text": "Melamed (1997a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "In addition, open class expressions are handled separately from closed class expressions, and sentences consisting of a single expression are handled in the manner of Tiedemann (1997) .",
"cite_spans": [
{
"start": 167,
"end": 183,
"text": "Tiedemann (1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "The algorithm is iterative, repeating the same process of generating translation pairs from the bitext, and then reducing the bitext by removing the pairs that have been found before the next iteration starts. The algorithm will stop when no more pairs can be generated, or when a given number of iterations have been completed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
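{
"text": "The iteration can be summarized in a short Python sketch; generate_pairs and remove_pairs are placeholders for the operations (i)-(iv) spelled out below, not functions from the paper.\n\ndef align(bitext, max_iterations=6):\n    # Harvest translation pairs, then shrink the bitext before the next pass.\n    lexicon = []\n    for _ in range(max_iterations):\n        pairs = generate_pairs(bitext)  # t-score ranking, steps (i)-(iii)\n        if not pairs:\n            break  # no more pairs can be generated\n        lexicon.extend(pairs)\n        bitext = remove_pairs(bitext, pairs)  # step (iv): reduces marginals\n    return lexicon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},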
{
"text": "In each iteration, the following operations are performed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "(i) For each open class expression in the source half of the bitext (with frequency higher than 3), the open class expressions in corresponding sentences of the other half are ranked according to their likelihood as translations of the given source expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "We estimate the probability that a candidate target expression is a translation by counting cooccurrences of the expressions within sentence pairs and overall occurrences in the bitext as a whole. Then the t-score, used by Fung and Church, is calculated, and the candidates are ranked on the basis of this value:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "In our case K is the number of sentence pairs in (V~., V,) the bitext. The target expression giving the highest t-score is selected as a translation provided the following two conditions are met: (a) this t-score is higher than a given threshold, and (b) the overall frequency of the pair is sufficiently high. (These are the same conditions that are used by Fung and Church.) This operation yields a list of translation pairs involving open class expressions.",
"cite_spans": [
{
"start": 49,
"end": 58,
"text": "(V~., V,)",
"ref_id": null
},
{
"start": 359,
"end": 376,
"text": "Fung and Church.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "prob(V.~,V,) -prob(V,) prob(V,) t= ~ l prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
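{
"text": "As a worked illustration of the ranking step, here is a minimal Python version of the t-score computation from raw counts; the function name and the example counts are invented.\n\nfrom math import sqrt\n\ndef t_score(cooc, freq_s, freq_t, k):\n    # k = number of sentence pairs; probabilities are relative frequencies.\n    p_st = cooc / k\n    p_s = freq_s / k\n    p_t = freq_t / k\n    if p_st == 0:\n        return 0.0\n    return (p_st - p_s * p_t) / sqrt(p_st / k)\n\n# A pair co-occurring in 12 of 1000 sentence pairs:\nprint(round(t_score(12, 15, 14, 1000), 2))  # 3.4, well above the 1.65 threshold used in section 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},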
{
"text": "(ii) The same as in (i) but this time with the closed class expressions. A difference from the previous stage is that only target candidates of the proper sub-category or sub-categories for the source expression are considered. Conjunctions and personal pronouns are for example specified for both the target and the source languages. This strategy helps to limit the search space when closed-class expressions are linked. (iv) When all (relevant) source expressions have been tried in this manner, a number of translation pairs have been obtained that are entered in the output table and then removed from the bitext. This will affect t-scores by reducing mariginal fi:equencies and will also cause fewer candidate pairs to be considered in the sequel. The reduced bitext is input for the next iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic operation",
"sec_num": "4.4"
},
{
"text": "The basic algorithm is enhanced by a number of modules that can be combined freely by the user. \"Ihese modules are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variants",
"sec_num": "4.5"
},
{
"text": "\u2022 a morphological module that groups expressions that are identical modulo specified sets of suffices;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variants",
"sec_num": "4.5"
},
{
"text": "\u2022 a weight module that affects the likelihood of a candidate translation according to its position in the sentence;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variants",
"sec_num": "4.5"
},
{
"text": "\u2022 a phrase module that includes multi-word expressions generated in the pre-processing stage as candidate expressions for alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variants",
"sec_num": "4.5"
},
{
"text": "The morphological module collects open class translation pairs that are similar to the ones that are found by the basic algorithm. More precisely, if the pair (X, Y) has been generated as a translation pair in some iteration, other candidate pairs with X as the first element are searched. A pair (X, Z) is considered to be a translation pair iff there exist strings W, F and G such that Y= WE; Z=WG and F and G have been defined as different suffices of the same paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The morphological module",
"sec_num": "4.5.1"
},
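{
"text": "A minimal sketch of this matching rule, assuming user-supplied suffix lists; the function and variable names are illustrative.\n\ndef same_paradigm(y, z, paradigms):\n    # True if y = W+F and z = W+G for distinct suffixes F, G of one paradigm.\n    for suffixes in paradigms:\n        for f in suffixes:\n            for g in suffixes:\n                if f == g or not y.endswith(f) or not z.endswith(g):\n                    continue\n                stem_y = y[:len(y) - len(f)] if f else y\n                stem_z = z[:len(z) - len(g)] if g else z\n                if stem_y and stem_y == stem_z:\n                    return True\n    return False\n\nparadigms = [[\"\", \"s\", \"ed\", \"ing\"]]  # regular English verbs; \"\" is the 0 suffix\nprint(same_paradigm(\"linked\", \"linking\", paradigms))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The morphological module",
"sec_num": "4.5.1"
},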
{
"text": "The data needed for this module consists of simple suffix lists for regular paradigms of the languages involved. For example, [0, s, ed, ing] is a suffix list for regular English verbs. They have to be defined by the user in advance.",
"cite_spans": [
{
"start": 126,
"end": 141,
"text": "[0, s, ed, ing]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The morphological module",
"sec_num": "4.5.1"
},
{
"text": "When the morphological module is used, it is possible to reverse the direction of the linking process at a certain stage. After each iteration of linking expressions from source to target, the different inflectional variants of the target word are used as input data and these candidates are then linked from target to source. This strategy makes it possible to link low-frequency source expressions belonging to the same suffix paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The morphological module",
"sec_num": "4.5.1"
},
{
"text": "The weight module",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.5.2",
"sec_num": null
},
{
"text": "The weight module distribute weights over the target expressions depending on their position relative to the given source expression. The weights must be provided by the user in the form of lists of numbers (greater than or equal to 0). The weight for a pair is caclulated as the sum of the weights for the instances of that pair. This weight is then used to adjust the co-occurrence probabilities by using the weight instead of the cooccurrence frequency as input to the the t-score formula. The threshold used is adjusted accordingly. In the current configuration of weights, the threshold is increased by 1. In the weight module it is possible to specify the maximal distance between a source and target expression measured as their relative position in the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.5.2",
"sec_num": null
},
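{
"text": "A minimal sketch of how such positional weights could replace the raw co-occurrence count; the taper values are invented, since the paper leaves them to the user.\n\ndef weighted_cooc(instances, weights, max_dist=10):\n    # instances: (i, j) positions of the source and target expressions in each\n    # sentence pair; weights is indexed by the relative distance |i - j|.\n    total = 0.0\n    for i, j in instances:\n        d = abs(i - j)\n        if d <= max_dist and d < len(weights):\n            total += weights[d]\n    return total  # used in place of the co-occurrence frequency in the t-score\n\nw = [1.0, 0.9, 0.7, 0.5, 0.3]  # full weight at equal positions, tapering off\nprint(weighted_cooc([(2, 2), (5, 7)], w))  # 1.0 + 0.7 = 1.7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.5.2",
"sec_num": null
},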
{
"text": "When the phrase module is invoked, multi-word expressions are also considered as potential elements of translation pairs. The multi-word expressions to be considered are generated in a special pre-processing phase and stored in a phrase table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The phrase module",
"sec_num": "4.5.3"
},
{
"text": "T-scores for candidate translation pairs involving multi-word expressions are calculated in the same way as for single words. When weights are used the weight of a multi-word expression is considered equal to that of its first word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The phrase module",
"sec_num": "4.5.3"
},
{
"text": "It can happen that the t-scores for two pairs <s f> and <s,t2>, where t t is a multi-word expression and t 2 is a word that is pmt of t ~, will be identical or almost identical. In this case we prefer the almost identical target multi-word expression over a single word candidate if it has a t-value over the threshold and is one of the top six target candidates. When a multi-word expression is found to be an element of a translation pair, the expressions that overlap with it, whether multiword or single-word expressions, are removed from the current agenda and not considered until the next iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The phrase module",
"sec_num": "4.5.3"
},
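{
"text": "A minimal sketch of the preference rule just described; the tolerance eps is an invented stand-in, since the paper only says \"identical or almost identical\".\n\ndef prefer_multiword(ranked, threshold, eps=0.05):\n    # ranked: (target expression, t-score) pairs in descending t-score order.\n    best_expr, best_t = ranked[0]\n    for expr, t in ranked[:6]:  # only the top six candidates qualify\n        if (\" \" in expr and t > threshold\n                and best_t - t <= eps and best_expr in expr.split()):\n            return expr  # the containing multi-word expression wins\n    return best_expr\n\nranked = [(\"server\", 3.41), (\"network server\", 3.39)]\nprint(prefer_multiword(ranked, threshold=1.65))  # network server",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The phrase module",
"sec_num": "4.5.3"
},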
{
"text": "The algorithm was tested on two different texts; one novel (66,693 source words) and one computer program manual (169.779 source words) which both were translated from English into Swedish. The tests were run on a Sun UltraSparcl Workstation with 320 MB RAM and took 55 minutes for the novel and 4 and a half hour for the program manual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "The tests were run with three different configurations on each text: (i) the baseline (B) configuration which is the t-score measure, (ii) all modules except the weights module (AM-W), but a linkdistance constraint was used and set to 10; and (iii) all modules (AM) including morphology, weights and phrases. The t-score threshold used was 1.65 for B and AM-W, and 2.7 for AM, the minimum frequency of source expression was set to 3. Closed-class expressions were linked in all configurations. In the baseline configuration no distinction was made between closed-class and open-class expressions. In the AM-W and AM tests the closed-class expressions were divided into different subcategories and at the end of each iteration the linking direction was reversed at the end of each of the six iterations which improves the chances of linking low frequency source expressions. The characteristics of the source texts used are shown in Table 3 . Table 4 . Results from two bitexts, using T-score W), and all modules (AM) Novel The novel contains a high number of low frequency words whereas the program manual contains a higher proportion of words that the algorithm acturally tested as the frequency threshold was set to 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 932,
"end": 939,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 942,
"end": 949,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "The results from the tests are shown in Table 4 . The evaluation was done on an extract from the automatically produced dictionary. All expressions starting with the letters N, O and P were evaluated for all three configurations of each text.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "The results from the novel show that recall is almost tripled in the sample, from 234 in the B configuration to 709 linked source expressions with the AM configuration. Precision values for the novel lie in the range from 90.13 to 92.50 per cent when partial links are judged as errors and slightly higher if they are not. The use of weights seems to make precision somewhat lower for the novel which perhaps could be explained by the fact that the novel is a much more varied text type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "For the program manual the recall results are as good as for the novel (three times as many linked source types for the AM configuration compared to baseline). Precision is increased, but perhaps not only (B), all modules except the weights (AM- to the level we anticipated at first. Multi-word expressions are linked with a relatively high recall (above 70%), but the precision of these links are not as high as for single words. Our evaluations of the links show that one major problem lies in the quality of the multi-word expressions that are fed into the alignment program. As the program works iteratively and in the current version starts with the multi-word expressions, any errors at this stage will have consequences in later iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "We have run each module separately and observed that the addition of each module improves the baseline configuration by itself. To compare our results to those from other approaches is difficult. Not only are we dealing with different language pairs but also with different texts and text types. There is also the issue of different evaluation criteria. A pure wordto-word alignment cannot be compared to an approach where lexical units (both single word expressions and multi-word expressions) ,are linked. Neither can the combined approach be compared to a pure phrase alignment program because the aims of the alignment are different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
{
"text": "However, as far as we can judge given these difficulties, the results presented in this paper are on par with previous work for precision and possibly an improvement on recall because of how we handle low-frequency variants in the morphology module and by using the single-wordline strategy. The handling of closed-class e, xpressions have also been improved due to the division of these expressions into subcategories which limits the search space considerably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "This work is part of the project \"Parallell corpora in Link6ping, Uppsala and G6teborg\" (PLUG), jointly funded by Nutek and HSFR under the Swedish National research programme in Language Technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "% Statistical Approach to Language Translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pielra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th International Conference on Computational Linguistics. Bu&/pest",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P.F., J. Cocke, S. Della Pietra, V. Della Pielra, F. Jelinek, R. Mercer, & P. Roossin. (1988)'% Statistical Approach to Language Translation.\" Proceedings of the 12th International Conference on Computational Linguistics. Bu&/pest.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Statistical Approach to Machine Translation",
"authors": [
{
"first": "P F",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mincer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P F, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mincer, & P. Roossin. (1990) \"A Statistical Approach to Machine Translation.\" Computational Linguistics 16(2).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Termight: Identifying and Translating Tectmical Terminology",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "&",
"middle": [
"K W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings from the Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I, & K. W. Church. (1994)'`Termight: Identifying and Translating Tectmical Terminology.\" Proceedings from the Conference on Applied Natural Language Processing; Stuttgart.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "K-vec: A New Approach for Aligning Pmallel Texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "&",
"middle": [
"K W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings from the 15th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung, P, & K. W. Church. (1994) \"K-vec: A New Approach for Aligning Pmallel Texts.\" Proceedings from the 15th International Conference on Computational Linguistics, Kyoto.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards automatically aligning German comlx)unds with English word groups",
"authors": [
{
"first": "D: & M",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alexa",
"suffix": ""
}
],
"year": 1997,
"venue": "New Methods in language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, D: & M. Alexa (1997) \"Towards automatically aligning German comlx)unds with English word groups.\" In New Methods in language Processing (eds. Jones D. & H. Somers). UCL Press, London.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic Extraction of Word Sequence Correspondences in Parallel Corpora",
"authors": [
{
"first": "M",
"middle": [
"& Y"
],
"last": "Kitamura",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Fot~lh Annual Wolkshop on Very Large Corpora (WVLC-4)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kitamura, M. & Y. Matsumoto (1996) \"Automatic Extraction of Word Sequence Correspondences in Parallel Corpora\". In Proceedings of the Fot~lh Annual Wolkshop on Very Large Corpora (WVLC-4), Copenhagen.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Line 'Era Up: Advances in Alignment Technology ,and Their Impact on Tr~mslation Support Tools",
"authors": [
{
"first": "E",
"middle": [],
"last": "Macklovitch",
"suffix": ""
},
{
"first": "Marie-Loiuse",
"middle": [],
"last": "Hannan",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Second Conference of the A~sociation for Machine Tram'lation hz the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Macklovitch, E., & Marie-Loiuse Hannan. (1996) \"Line 'Era Up: Advances in Alignment Technology ,and Their Impact on Tr~mslation Support Tools.\" In Proceedings of the Second Conference of the A~sociation for Machine Tram'lation hz the Americas, Montreal.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Word-to-Word Model of Translational ~luivalence",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Mel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Conference of the Ass'ociation for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mel,'uned, I. D. (1997a) \"A Word-to-Word Model of Translational ~luivalence.\" Proceedings of the 35th Conference of the Ass'ociation for Computational Linguistics', Madrid.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic Discovery of Non-Compositional Comlx)unds in Parallel Data",
"authors": [
{
"first": "I.",
"middle": [
"Dan"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Paper pre~nted at the 2nd Conference on Empirical Methods in Natm~ Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan. (1997b) \"Automatic Discovery of Non- Compositional Comlx)unds in Parallel Data.\" Paper pre~nted at the 2nd Conference on Empirical Methods in Natm~ Language Processing, Providence.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Phrase-Retriewtl System Bacw.A on Recurrence",
"authors": [
{
"first": "M",
"middle": [
"B"
],
"last": "Mcrkel",
"suffix": ""
},
{
"first": "&",
"middle": [
"L"
],
"last": "Nilsson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ahrenhcrg",
"suffix": ""
}
],
"year": 1994,
"venue": "15\"oceedings of the Second Annual Workshop on Very I~lrge Corpora (WVLC-2)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mcrkel, M. B. Nilsson, & L. Ahrenhcrg, (1994) \"A Phrase- Retriewtl System Bacw.A on Recurrence.\" In 15\"oceedings of the Second Annual Workshop on Very I~lrge Corpora (WVLC-2). Kyoto.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semi-Automatic Acquisition of Domain-Specific Translation Lexicons",
"authors": [
{
"first": "P",
"middle": [
"I D"
],
"last": "Resnik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 7th ACL Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. & I. D. Melamed. (1997) \"Semi-Automatic Acquisition of Domain-Specific Translation Lexicons.\" In Proceedings of the 7th ACL Conference on Applied Natural Language Processing. Washington DC.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Translating Collocations for Bilingual Lexicons: A Statistical Approach",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "&",
"middle": [
"V"
],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja F., K. McKeown, & V. Hatzivassiloglou, (1996) ''Translating Collocations for Bilingual Lexicons: A Statistical Approach.\" In Computational Linguistics, Vol. 22 No. 1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic Lexicon Extraction from Aligned Bilingual Corpora",
"authors": [
{
"first": "J6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiedemann, J6rg. (1997) \"Automatic Lexicon Extraction from Aligned Bilingual Corpora.\" Diploma Thesis, Otto- von-Guericke-Universitfit Magdeburg.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "iii) Open class expressions that constitute a sentence on their own (not counting i~xelevant word tokens) generate translation pairs with the open class expressions of the corresponding sentence.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF1": {
"content": "<table><tr><td>_~</td><td>Swedish</td></tr><tr><td>after.all</td><td>n~ir atlt kommcr omkring</td></tr><tr><td>in spite of</td><td>trots</td></tr><tr><td>U~__~cneral</td><td>i allmiinhet</td></tr><tr><td>1. The Problem</td><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td/><td colspan=\"2\">Novel Prog. Man.</td></tr><tr><td>Size in running words</td><td>66,693</td><td>169,779</td></tr><tr><td>No of word types</td><td>9,917</td><td>3,828</td></tr><tr><td>Word types frequency 3 or</td><td>2,870</td><td>2,274</td></tr><tr><td>Word types frequency 2 or 1</td><td>7,047</td><td>1,554</td></tr><tr><td>Multi-word expression t~q~cs</td><td>243</td><td>981</td></tr><tr><td>(fot~n d in pre-processin~g)</td><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
}
}
}
}