|
{ |
|
"paper_id": "W03-0313", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T06:12:26.898135Z" |
|
}, |
|
"title": "Translation Spotting for Translation Memories", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratoire de recherche appliqu\u00e9e en linguistique informatique (RALI) D\u00e9partement d'informatique", |
|
"institution": "succursale Centre-ville", |
|
"location": { |
|
"postCode": "2241, H3C 3J7", |
|
"settlement": "Local, Montr\u00e9al (Qu\u00e9bec)", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The term translation spotting (TS) refers to the task of identifying the target-language (TL) words that correspond to a given set of sourcelanguage (SL) words in a pair of text segments known to be mutual translations. This article examines this task within the context of a sub-sentential translation-memory system, i.e. a translation support tool capable of proposing translations for portions of a SL sentence, extracted from an archive of existing translations. Different methods are proposed, based on a statistical translation model. These methods take advantage of certain characteristics of the application, to produce TL segments submitted to constraints of contiguity and compositionality. Experiments show that imposing these constraints allows important gains in accuracy, with regard to the most probable alignments predicted by the model.", |
|
"pdf_parse": { |
|
"paper_id": "W03-0313", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The term translation spotting (TS) refers to the task of identifying the target-language (TL) words that correspond to a given set of sourcelanguage (SL) words in a pair of text segments known to be mutual translations. This article examines this task within the context of a sub-sentential translation-memory system, i.e. a translation support tool capable of proposing translations for portions of a SL sentence, extracted from an archive of existing translations. Different methods are proposed, based on a statistical translation model. These methods take advantage of certain characteristics of the application, to produce TL segments submitted to constraints of contiguity and compositionality. Experiments show that imposing these constraints allows important gains in accuracy, with regard to the most probable alignments predicted by the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Translation spotting is the term coined by V\u00e9ronis and Langlais (2000) for the task of identifying the wordtokens in a target-language (TL) translation that correspond to some given word-tokens in a source-language (SL) text. Translation spotting (TS) takes as input a couple, i.e. a pair of SL and TL text segments, which are known to be translations of one another, and a SL query, i.e. a subset of the tokens of the SL segment, on which the TS will focus its attention. The result of the TS process consists of two sets of tokens, i.e. one for each language. We call these sets the SL and TL answers to the query.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 70, |
|
"text": "V\u00e9ronis and Langlais (2000)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In more formal terms:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The input to the TS process is a pair of SL and TL text segments S, T , and a contiguous, non-empty sequence of word-tokens in S, q = s i1 ...s i2 (the query).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The output is a pair of sets of tokens r q (S), r q (T ) , the SL answer and TL answer respectively. Figure 1 shows some examples of TS, where the words in italics represent the SL query, and the words in bold are the SL and TL answers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 111, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As can be seen in these examples, the tokens in the query q and answers r q (S) and r q (T ) may or may not be contiguous (examples 2 and 3), and the TL answer may possibly be empty (example 4) when there is no satisfying way of linking TL tokens to the query.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Translation spotting finds different applications, for example in bilingual concordancers, such as the TransSearch system (Macklovitch et al., 2000) , and example-based machine translation (Brown, 1996) . In this article, we focus on a different application: a subsentential translation memory. We describe this application context in section 2, and discuss how TS fits in to this type of system. We then propose in section 3 a series of TS methods, specifically adapted to this application context. In section 4, we present an empirical evaluation of the proposed methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 148, |
|
"text": "(Macklovitch et al., 2000)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 202, |
|
"text": "(Brown, 1996)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A translation memory system is a type of translation support tool whose purpose is to avoid the re-translation of segments of text for which a translation has previously been produced. Typically, these systems are integrated to a word-processing environment. Every sentence that the user translates within this environment is stored in a database (the translation memory -or TM). Whenever the system encounters some new text that matches a sentence in the TM, its translation is retrieved and proposed to the translator for reuse. J'ai eu la chance de voyager pendant pr\u00e8s de 40 ans .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sub-sentential Translation Memory Systems", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Figure 1, example 4. Query: to the extent that. SL: To the extent that the Canadian government could be open, it has been so. TL: Le gouvernement canadien a \u00e9t\u00e9 aussi ouvert qu'il le pouvait.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Figure 1: Translation spotting examples",

"sec_num": null

},
|
{ |
|
"text": "As suggested in the above paragraph, existing systems essentially operate at the level of sentences: the TM is typically made up of pairs of sentences, and the system's proposals consist in translations of complete sentences. Because the repetition of complete sentences is an extremely rare phenomenon in general language, this level of resolution limits the usability of TM's to very specific application domains -most notably the translation of revised or intrinsically repetitive documents. In light of these limitations, some proposals have recently been made regarding the possibility of building TM systems that operate \"below\" the sentence level, or sub-sentential translation memories (SSTM) -see for example (Lang\u00e9 et al., 1997; McTait et al., 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 718, |
|
"end": 738, |
|
"text": "(Lang\u00e9 et al., 1997;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 759, |
|
"text": "McTait et al., 1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Putting together this type of system raises the problem of automatically establishing correspondences between arbitrary sequences of words in the TM, or, in other words, of \"spotting translations\". This process (translation spotting) can be viewed as a by-product of wordalignment, i.e. the problem of establishing correspondences between the words of a text and those of its translation: obviously, given a complete alignment between the words of the SL and TL texts, we can extract only that part of the alignment that concerns the TS query; conversely, TS may be seen as a sub-task of the wordalignment problem: a complete word-alignment can be obtained by combining the results of a series of TS operations, covering the entirety of the SL text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From the point of view of an SSTM application, the TS mechanism should find the TL segments that are the most likely to be useful to the translator in producing the translation of a given SL sentence. In the end, the final criterion by which a SSTM will be judged is profitability: to what extent do the system's proposals enable the user to save time and/or effort in producing a new translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From that perspective, the two most important characteristics of the TL answers are relevance, i.e. whether or not the system's TL proposals constitute valid translations for some part of the source sentence; and coherence, i.e. whether the proposed segments are well-formed, at least from a syntactic point of view. As suggested by McTait et al. (1999) , \"linguistically motivated\" sub-sentential entities are more likely than arbitrary sequences of words to lead to useful proposals for the user. Planas (2000) proposes a fairly simple approach for an SSTM: his system would operate on sequences of syntactic chunks, as defined by Abney (1991) . Both the contents of the TM and the new text under consideration would be segmented into chunks; sequences of chunks from the new text would then be looked up verbatim in the TM; the translation of the matched sequences would be proposed to the user as partial translations of the current input. Planas's case for using sequences of chunks as the unit of translation for SSTM's is supported by the coherence criterion above: chunks constitute \"natural\" textual units, which users should find easier to grasp and reuse than arbitrary sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 353, |
|
"text": "McTait et al. (1999)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 512, |
|
"text": "Planas (2000)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 645, |
|
"text": "Abney (1991)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The coherence criterion also supports the case for contiguous TL proposals, i.e. proposals that take the form of contiguous sequences of tokens from the TM, as opposed to discontiguous sets such as those of examples 2 and 3, in figure 1. This also makes intuitive sense from the more general point of view of profitability: manually \"filling holes\" within a discontiguous proposal is likely to be time-consuming and counter-productive. On the other hand, filling those holes automatically, as proposed for example by Lang\u00e9 et al. and McTait et al. , raises numerous problems with regard to syntactic and semantic wellformedness of the TL proposals. In theory, contiguous sequences of token from the TM should not suffer from such ills.", |
|
"cite_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 547, |
|
"text": "Lang\u00e9 et al. and McTait et al.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, and perhaps more importantly, in a SSTM application such as that proposed by Planas, there appears to be statistical argument in favor of contiguous TL proposals: the more frequent a contiguous SL sequences, the more likely it is that its TL equivalent is also contiguous. In other words, there appears to be a natural tendency for frequently-occurring phrases and formulations to correspond to like-structured sequences in other languages. This will be discussed further in section 4. But clearly, a TS mechanism intended for such a SSTM should take advantage of this tendency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Translation spotting examples", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we propose various TS methods, specifically adapted to a SSTM application such as that proposed by Planas (2000) , i.e. one which takes as translation unit contiguous sequences of syntactic chunks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 129, |
|
"text": "Planas (2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TS Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As mentioned earlier, TS can be seen as a bi-product of word-level alignments. Such alignments have been the focus of much attention in recent years, especially in the field of statistical translation modeling, where they play an important role in the learning process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the purpose of statistical translation modeling, Brown et al. (1993) define an alignment as a vector a = a 1 ...a m that connects each word of a source-language text S = s 1 ...s m to a target-language word in its translation T = t 1 ...t n , with the interpretation that word t aj is the translation of word s j in S (a j = 0 is used to denote words of s that do not produce anything in T ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 72, |
|
"text": "Brown et al. (1993)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Brown et al. also define the Viterbi alignment between source and target sentences S and T as the alignment a whose probability is maximal under some translation model:\u00e2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "= argmax a\u2208A Pr M (a|S, T )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where A is the set of all possible alignments between S and T , and Pr M (a|S, T ) is the estimate of a's probability under model M, which we denote Pr(a|S, T ) from hereon. In general, the size of A grows exponentially with the sizes of S and T , and so there is no efficient way of computing\u00e2 efficiently. However, under Model 2, the probability of an alignment a is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(a|S, T ) = m i=1 Pr(a i |i, m, n)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(j|i, m, n) = \u03b3(j, i, m, n) n J=0 \u03b3(J, i, m, n) ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and \u03b3(j, i, m, n) = t(s i |t j )a(j, i, m, n)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this last equation, t(s i |t j ) is the model's estimate of the \"lexical\" distribution p(s i |t j ), while a(j, i, m, n) estimates the \"alignment\" distribution p(j|i, m, n). Therefore, with this model, the Viterbi alignment can be obtained by simply picking for each position i in S, the alignment that maximizes t(s i |t j )a(j, i, m, n). This procedure can trivially be carried out in O(mn) operations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Because of this convenient property, we base the rest of this work on this model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Adapting this procedure to the TS task is straightforward: given the TS query q, produce as TL answer the corresponding set of TL tokens in the Viterbi alignment: r q (T ) = {t\u00e2 i 1 , ..., t\u00e2 i 2 } (the SL answer is simply q itself). We call this method Viterbi TS: it corresponds to the most likely alignment between the query q and TL text T , given the probability estimates of the translation model. If q contains I tokens, the Model 2 Viterbi TS can be computed in O(In) operations. Figure 2 shows an example of the result of this process. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 496, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Viterbi TS", |
|
"sec_num": "3.1" |
|
}, |
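{

"text": "To make the procedure concrete, here is a minimal Python sketch of Model 2 Viterbi TS. It assumes the model parameters are supplied as plain dictionaries t_table[(s, t)] and a_table[(j, i, m, n)] (hypothetical containers standing in for a trained model; the names are illustrative, not from the paper):\n\ndef viterbi_ts(query_positions, S, T, t_table, a_table):\n    # S, T: token lists; positions are 1-based, and j = 0 is the null TL token,\n    # looked up with t = None in the lexical table.\n    m, n = len(S), len(T)\n    answer = set()\n    for i in query_positions:\n        # pick the connection a_i = j maximizing t(s_i|t_j) * a(j, i, m, n)\n        best_j = max(range(n + 1), key=lambda j: t_table.get((S[i - 1], T[j - 1] if j > 0 else None), 0.0) * a_table.get((j, i, m, n), 0.0))\n        if best_j > 0:  # drop null connections\n            answer.add(best_j)\n    return answer  # positions of r_q(T) in T",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Viterbi TS",

"sec_num": "3.1"

},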
|
{ |
|
"text": "The tokens of the TL answer produced by Viterbi TS are not necessarily contiguous in T which, as remarked earlier, is problematic in a TM application. Various a posteriori processings on r q (T ) are possible to fix this; we list here only the most obvious:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-processings", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "expansion : Take the minimum and maximum values in {\u00e2 i1 , ...,\u00e2 i2 }, and produce the sequence t min ai ...t max ai ; in other words, produce as TL answer the smallest contiguous sequence in T that contains all the tokens of r q (T ). longest-sequence : Produce the subset of r q (T ) that constitutes the longest contiguous sequence in T . zero-tolerance : If the tokens in r q (T ) cannot be arranged in a contiguous sequence of T , then simply discard the whole TL answer. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-processings", |
|
"sec_num": "3.2" |
|
}, |
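{

"text": "As a minimal sketch of the three post-processings, assuming the Viterbi TL answer is given as a set of 1-based token positions in T (function names are illustrative):\n\ndef expansion(positions):\n    # smallest contiguous span of T covering all linked positions\n    return set(range(min(positions), max(positions) + 1)) if positions else set()\n\ndef longest_sequence(positions):\n    # keep only the longest run of consecutive positions\n    runs, run = [], []\n    for p in sorted(positions):\n        if run and p == run[-1] + 1:\n            run.append(p)\n        else:\n            run = [p]\n            runs.append(run)\n    return set(max(runs, key=len)) if runs else set()\n\ndef zero_tolerance(positions):\n    # discard the whole answer unless it is already contiguous\n    return positions if positions == expansion(positions) else set()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Post-processings",

"sec_num": "3.2"

},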
|
{ |
|
"text": "The various independence assumptions underpinning IBM Model 2 often have negative effects on the resulting Viterbi alignments. In particular, this model assumes rq(T ) = {le, engagement, du, gouvernement} post-processing: expansion : X(rq(T )) = le v\u00e9ritable engagement du gouvernement longest-sequence : L(rq(T )) = engagement du gouvernement zero-tolerance : Z(rq(T )) = \u2205 Figure 3 : Post-processings on Viterbi TS that all connections within an alignment are independent of each other, which leads to numerous aberrations in the alignments. Typically, each SL token gets connected to the TL token with which it has the most \"lexical affinities\", regardless of other existing connections in the alignment and, more importantly, of the relationships this token holds with other SL tokens in its vicinity. Conversely, some TL tokens end up being connected to several SL tokens, while other TL tokens are left unconnected.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 383, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As mentioned in section 2, in a sub-sentential TM application, contiguous sequences of tokens in the SL tend to translate into contiguous sequences in the TL. This suggests that it might be a good idea to integrate a \"contiguity constraint\" right into the alignment search procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For example, we can formulate a variant of the Viterbi TS method above, which looks for the alignment that maximizes Pr(a|S, T ), under the constraint that the TL tokens aligned with the SL query must be contiguous. Consider a procedure that seeks the (possibly null) sequence t j1 ...t j2 of T , that maximizes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Pr(a q |s i2 i1 , t j2 j1 )Pr(aq|s i1\u22121 1 s m i2+1 , t j1\u22121 1 t n j2+1 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Such a procedure actually produces two distinct alignments over S and T : an alignment a q , which connects the query tokens (the sequence s i2 i1 ) with a sequence of contiguous tokens in T (the sequence t j2 j1 ), and an alignment aq, which connects the rest of sentence S (i.e. all the tokens outside the query) with the rest of T . Together, these two alignments constitute the alignment a = a q \u222a aq, whose probability is maximal, under a double constraint:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "1. the query tokens s i2 i1 can only be connected to tokens within a contiguous region of T (the sequences t j2 j1 ); 2. the tokens outside the query (in either one of the two sequences s i1\u22121 1 and s m i2+1 ) can only get connected to tokens outside t j2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "j1 . With such an alignment procedure, we can trivially devise a TS method, which will return the optimal t j2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "j1 as TL answer. We call this method Contiguous TS. Alignments satisfying the above constraints can be obtained directly, by computing Viterbi alignments a q and aq for each pair of target positions j 1 , j 2 . The TS procedure then retains the pair of TL language positions that maximizes the joint probability of alignments a q and aq. This operation requires the computation of two Viterbi alignments for each pair j 1 , j 2 , i.e. n(n \u2212 1) Viterbi alignments, plus a \"null\" alignment, corresponding to the situation where t j2 j1 = \u2205. Overall, using IBM Model 2, the operation requires O(mn 3 ) operations. Figure 4 illustrates a contiguous TS obtained on the example of figure 2.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 611, |
|
"end": 619, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contiguous TS", |
|
"sec_num": "3.3" |
|
}, |
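{

"text": "A minimal sketch of this search, assuming a helper viterbi_logprob(sl_positions, tl_positions) that returns the log-probability of the best Model 2 alignment of the given SL positions restricted to the given TL positions (the null token always being allowed); the helper and its name are assumptions for illustration:\n\ndef contiguous_ts(i1, i2, S, T, viterbi_logprob):\n    # O(n^2) candidate spans times O(mn) per Viterbi pass, i.e. O(mn^3) overall\n    m, n = len(S), len(T)\n    query = list(range(i1, i2 + 1))\n    rest = [i for i in range(1, m + 1) if i < i1 or i > i2]\n    spans = [(j1, j2) for j1 in range(1, n + 1) for j2 in range(j1, n + 1)]\n    spans.append(None)  # the null answer, TL span empty\n    best, best_span = float('-inf'), None\n    for span in spans:\n        inside = list(range(span[0], span[1] + 1)) if span else []\n        outside = [j for j in range(1, n + 1) if not span or j < span[0] or j > span[1]]\n        # joint score of a_q (query inside the span) and a_qbar (rest outside it)\n        score = viterbi_logprob(query, inside) + viterbi_logprob(rest, outside)\n        if score > best:\n            best, best_span = score, span\n    return best_span  # (j1, j2) of the TL answer, or None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contiguous TS",

"sec_num": "3.3"

},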
|
{

"text": "Figure 4 alignments: a_q = {the \u2192 engagement, government \u2192 gouvernement, 's \u2192 du, commitment \u2192 engagement}; a_{\\bar q} = {Let us see \u2192 Voyons, where \u2192 quel, is \u2192 est, really \u2192 v\u00e9ritable, at \u2192 la, in terms of \u2192 envers, the \u2192 la, farm \u2192 agricole, community \u2192 communaut\u00e9, . \u2192 .}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Alignment:",

"sec_num": null

},
|
{ |
|
"text": "As pointed out in section 3.3, In IBM-style alignments, a single TL token can be connected to several SL tokens, which sometimes leads to aberrations. This contrasts with alternative alignment models such as those of Melamed (1998) and Wu (1997) , which impose a \"one-to-one\" constraint on alignments. Such a constraint evokes the notion of compositionality in translation: it suggests that each SL token operates independently in the SL sentence to produce a single TL token in the TL sentence, which then depends on no other SL token. This view is, of course, extreme, and real-life translations are full of examples (idiomatic expressions, terminology, paraphrasing, etc.) that show how this compositionality principle breaks down as we approach the level of word correspondences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 245, |
|
"text": "Wu (1997)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "However, in a TM application, TS usually needs not go down to the level of individual words. Therefore, compositionality can often be assumed to apply, at least to the level of the TS query. The contiguous TS method pro-posed in the previous section implicitly made such an assumption. Here, we push it a little further.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Consider a procedure that splits each the source and target sentences S and T into two independent parts, in such a way as to maximise the probability of the two resulting Viterbi alignments:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "argmax i,j,d \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 d = 1 : Pr(a 1 |s i 1 , t j 1 ) \u00d7Pr(a 2 |s m i+1 , t n j+1 ) d = \u22121 : Pr(a 1 |s i 1 , t n j+1 ) \u00d7Pr(a 2 |s m i+1 , t j 1 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In the triple i, j, d above, i represents a \"split point\" in the SL sentence S, j is the analog for TL sentence T , and d is the \"direction of correspondence\": d = 1 denotes a \"parallel correspondence\", i.e. s 1 ...s i corresponds to t 1 ...t j and s i+1 ...s m corresponds to t j+1 ...t n ; d = \u22121 denotes a \"crossing correspondence\", i.e. s 1 ...s i corresponds to t j+1 ...t n and s i+1 ...s m corresponds to t 1 ...t j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The triple I, J, D produced by this procedure refers to the most probable alignment between S and T , under the hypothesis that both sentences are made up of two independent parts (s 1 ...s I and s I+1 ...s m on the one hand, t 1 ...t J and t J+1 ...t n on the other), that correspond to each other two-by-two, following direction D. Such an alignment suggests that translation T was obtained by \"composing\" the translation of s 1 ...s I with that of s I+1 ...s m .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
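{

"text": "A minimal sketch of one level of this split search, reusing the assumed viterbi_logprob helper from the contiguous TS sketch (log domain, so the product of probabilities becomes a sum):\n\ndef best_split(S, T, viterbi_logprob):\n    # returns the triple (I, J, D) maximizing the two Viterbi alignment scores\n    m, n = len(S), len(T)\n    best, best_triple = float('-inf'), None\n    for i in range(1, m):  # SL split point; both SL parts non-empty\n        for j in range(0, n + 1):  # TL split point; TL parts may be empty\n            for d in (1, -1):\n                left_t = list(range(1, j + 1)) if d == 1 else list(range(j + 1, n + 1))\n                right_t = list(range(j + 1, n + 1)) if d == 1 else list(range(1, j + 1))\n                score = viterbi_logprob(list(range(1, i + 1)), left_t) + viterbi_logprob(list(range(i + 1, m + 1)), right_t)\n                if score > best:\n                    best, best_triple = score, (i, j, d)\n    return best_triple",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Compositional TS",

"sec_num": "3.4"

},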
|
{ |
|
"text": "This \"splitting\" process can be repeated recursively on each pair of matching segments, down to the point where each SL segment contains a single token. (TL segments can always be split, even when empty, because IBM-style alignments make it possible to connect SL tokens to the \"null\" TL token, which is always available.) This gives rise to a word-alignment procedure that we call Compositional word alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "This procedure actually produces two different outputs: first, a parallel partition of S and T into m pairs of segments s i , t k j , where each t k j is a (possibly null) contiguous sub-sequence of T ; second, an IBM-style alignment, such that each SL and TL token is linked to at most one token in the other language: this alignment is actually the concatenation of individual Viterbi alignments on the s i , t k j pairs, which connects each s i to (at most) one of the tokens in the corresponding t k j . Of course, such alignments face even worst problems than ordinary IBM-style alignments when confronted with non-compositional translations. However, when adapting this procedure to the TS task, we can hypothesize that compositionality applies, at least to the level of the SL query. This adaptation proceeds along the following modifications to the alignment procedure described above:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "1. forbid splittings within the SL query:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "i 1 \u2264 i \u2264 i 2 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "2. at each level of recursion, only consider that pair of segments which contains the SL query;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "3. stop the procedure as soon as it is no longer possible to split the SL segment, i.e. it consists of s i1 ...s i2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The TL segment matched with s i1 ...s i2 when the procedure terminates is the TL answer. We call this procedure Compositional TS. It can be shown that it can be carried out in O(m 3 n 2 ) operations in the worst case, and O(m 2 n 2 log m) on average. Furthermore, by limiting the search to split points yielding matching segments of comparable sizes, the number of required operations can be cut by one order of magnitude (Simard, 2003) . Figure 5 shows how this procedure splits the example pair of figure 2 (the query is shown in italics).", |
|
"cite_spans": [ |
|
{ |
|
"start": 422, |
|
"end": 436, |
|
"text": "(Simard, 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 447, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Compositional TS", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We describe here a series of experiments that were carried out to evaluate the performance of the TS methods described in section 3. We essentially identified a number of SL queries, looked up these segments in a TM to extract matching pairs of SL-TL sentences, and manually identified the TL tokens corresponding to the SL queries in each of these pairs, hence producing manual TS's. We then submitted the same sentence-pairs and SL queries to each of the proposed TS methods, and measured how the TL answers produced automatically compared with those produced manually. We describe this process and the results we obtained in more details below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The test material for our experiments was gathered from a translation memory, made up of approximately 14 years of Hansard (English-French transcripts of the Canadian parliamentary debates), i.e. all debates published between April 1986 and January 2002, totalling over 100 million words in each language. These documents were mostly collected over the Internet, had the HTML markup removed, were then segmented into paragraphs and sentences, aligned at the sentence level using an implementation of the method described in (Simard et al., 1992) , and finally dumped into a document-retrieval system (MG (Witten et al., 1999) ). We call this the Hansard TM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 545, |
|
"text": "(Simard et al., 1992)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 625, |
|
"text": "(MG (Witten et al., 1999)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Material", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To identify SL queries, a distinct document from the Hansard was used, the transcript from a session held in March 2002. The English version of this document was segmented into syntactic chunks, using an implementation of Osborne's chunker (Osborne, 2000) . All sequences of chunks from this text that contained three or more word tokens were then looked up in the Hansard TM. Among the sequences that did match sentences in the TM, 100 were selected at random. These made up the test SL queries. While some SL queries yielded only a handful of matches in the TM, others turned out to be very productive, producing hundreds (and sometimes thousands) of couples. For each test segment, we retained only the 100 first matching pair of sentences from the TM. This process yielded 4100 pairs of sentences from the TM, an average of 41 per SL query; we call this our test corpus. Within each sentence pair, we spotted translations manually, i.e. we identified by hand the TL word-tokens corresponding to the SL query for which the pair had been extracted. These annotations were done following the TS guidelines proposed by V\u00e9ronis (1998) ; we call this the reference TS.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 255, |
|
"text": "(Osborne, 2000)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1119, |
|
"end": 1133, |
|
"text": "V\u00e9ronis (1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Material", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The results of our TS methods on the test corpus were compared to the reference TS, and performance was measured under different metrics. Given each pair S, T from the test corpus, and the corresponding reference and evaluated TL answers r * and r, represented as sets of tokens, we computed:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "exactness : equal to 1 if r * = r, 0 otherwise;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "recall : |r * \u2229 r|/|r * | precision : |r * \u2229 r|/|r| F-measure : 2 |r\u2229r * | |r|+|r * |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In all the above computations, we considered that \"empty\" TL answers (r = \u2205) actually contained a single \"null\" word. These metrics were then averaged over all pairs of the test corpus (and not over SL queries, which means that more \"productive\" queries weight more heavily in the reported results).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2" |
|
}, |
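{

"text": "A minimal sketch of these metrics over token sets, following the null-word convention above (names are illustrative):\n\nNULL = object()  # stands for the single null word of an empty answer\n\ndef ts_scores(r_star, r):\n    # r_star: reference TL answer; r: evaluated TL answer (sets of tokens)\n    r_star = r_star or {NULL}\n    r = r or {NULL}\n    inter = len(r_star & r)\n    exactness = 1.0 if r_star == r else 0.0\n    recall = inter / len(r_star)\n    precision = inter / len(r)\n    f_measure = 2.0 * inter / (len(r) + len(r_star))\n    return exactness, recall, precision, f_measure",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "4.2"

},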
|
{ |
|
"text": "We tested all three methods presented in section 3, as well as the three \"post-processings\" on Viterbi TS proposed in section 3.2. All of these methods are based on IBM Model 2. The same model parameters were used for all the experiments reported here, which were computed with the GIZA program of the Egypt toolkit (Al-Onaizan et al., 1999) . Training was performed on a subset of about 20% of the Hansard TM. The Zero-tolerance post-processing produces empty TL answers whenever the TL tokens are not contiguous. On our test corpus, over 70% of all Viterbi alignments turned out to be non-contiguous. These empty TL answers were counted in the statistics above (Viterbi + Zero-tolerance row), which explains the low performance obtained with this method. In practice, the intention of Zero-tolerance post-processing is to filter out non-contiguous answers, under the hypotheses that they probably would not be usable in a TM application. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 341, |
|
"text": "(Al-Onaizan et al., 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Globally, in terms of exactness, compositional TS produces the best TL answers, with 40% correct answers, an improvement of 135% over plain Viterbi TS. This gain is impressive, particularily considering the fact that all methods use exactly the same data. In more realistic terms, the gain in F -measure is over 20%, which is still considerable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The best results in terms of precision are obtained with contiguous TS, which in fact is not far behind compositional TS in terms of recall either. This clearly demonstrates the impact of a simple contiguity constraint in this type of TS application. Overall, the best recall figures are obtained with the simple Extension post-processing on Viterbi TS, but at the cost of a sharp decrease in precision. Considering that precision is possibly more important than recall in a TM application, the contiguous TS would probably be a good choice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The Zero-tolerance strategy, used as a filter on Viterbi alignments, turns out to be particularily effective. It is interesting to note that this method is equivalent to the one proposed by Marcu (Marcu, 2001) to automatically construct a sub-sentential translation memory. Taking only non-null TS's into consideration, it outclasses all other methods, regardless of the metric. But this is at the cost of eliminating numerous potentially useful TL answers (more than 70%). This is particularily frustrating, considering that over 90% of all TL answers in the reference are indeed contiguous.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 209, |
|
"text": "(Marcu, 2001)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "To understand how this happens, one must go back to the definition of IBM-style alignments, which specifies that each SL token is linked to at most one TL token. This has a direct consequence on Viterbi TS's: if the SL queries contains K word-tokens, then the TL answer will itself contain at most that number of tokens. As a result, this method has systematic problems when the actual TL answer is longer than the SL query. It turns out that this occurs very frequently, especially when aligning from English to French, as is the case here. For example, consider the English sequence airport security, most often translated in French as s\u00e9curit\u00e9 dans les a\u00e9roports. The Viterbi alignment normally produces links airport \u2192 a\u00e9roport and security \u2192 s\u00e9curit\u00e9, and the sequence dans les is then left behind (or accidentally picked up by erroneous links from other parts of the SL sentence), thus leaving a non-contiguous TL answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The Expansion post-processing, which finds the shortest possible sequence that covers all the tokens of the Viterbi TL answer, solves the problem in simple situations such as the one in the above example. But in general, integrating contiguity constraints directly in the search procedure (contiguous and compositional TS) turns out to be much more effective, without solving the problem entirely. This is explained in part by the fact that these techniques are also based on IBM-style alignments. When \"surplus\" words appear at the boundaries of the TL answer, these words are not counted in the alignment probability, and so there is no particular reason to include them in the TL answer. Consider the following example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 These companies indicated their support for the government 's decision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Ces compagnies ont d\u00e9clar\u00e9 qu' elles appuyaient la d\u00e9cision du gouvernement .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "When looking for the French equivalent to the English indicated their support, we will probably end up with an alignment that links indicated \u2192 d\u00e9clar\u00e9 and support \u2192 appuyaient. As a result of contiguity constraints, the TL sequence qu' elle will naturally be included in the TL answer, possibly forcing a link their \u2192 elles in the process. However, the only SL that could be linked to ont is the verb indicated, which is already linked to d\u00e9clar\u00e9. As a result, ont will likely be left behind in the final alignment, and will not be counted when computing the alignment's probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We have presented different translation spottings methods, specifically adapted to a sub-sentential translation memory system that proposes TL translations for SL sequences of syntactic chunks, as proposed by Planas (2000) . These methods are based on IBM statistical translation Model 2 (Brown et al., 1993) , but take advantage of certain characteristics of the segments of text that can typically be extracted from translation memories. By imposing contiguity and compositionality constraints on the search procedure, we have shown that it is possible to perform translation spotting more accurately than by simply relying on the most likely word alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 222, |
|
"text": "Planas (2000)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 308, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Yet, the accuracy of our methods still leave a lot to be desired; on closer examination most of our problems can be attributed to the underlying translation model. Computing word alignments with IBM Model 2 is straightforward and efficient, which made it a good choice for experimenting; however, this model is certainly not the state of the art in statistical translation modeling. Thenagain, the methods proposed here were all based on the idea of finding the most likely word-alignment under various constraints. This approach is not dependent on the underlying translation model, and similar methods could certainly be devised based on more elaborate models, such as IBM Models 3-5, or the HMM-based models proposed by Och et al. (1999) for example.", |
|
"cite_spans": [ |
|
{ |
|
"start": 723, |
|
"end": 740, |
|
"text": "Och et al. (1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Alternatively, there are other ways to compensate for Model 2's weaknesses. Each IBM-style alignment between two segments of text denotes one particular explanation of how the TL words emerged from the SL words, but it doesn't tell the whole story. Basing our TS methods on a set of likely alignments rather than on the single most-likely alignment, as is normally done to estimate the parameters of higher-level models, could possibly lead to more accurate TS results. Similarly, TS applications are not bound to translation directionality as statistical translation systems are; this means that we could also make use of a \"reverse\" model to obtain a better estimate of the likelihood of two segments of text being mutual translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "These are all research directions that we are currently pursuing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Parsing by Chunks", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Principle-Based Parsing: Computation and Psycholinguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Abney. 1991. Parsing by Chunks. In R.C. Berwick, editor, Principle-Based Parsing: Computation and Psycholinguistics, pages 257-278.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Netherlands", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kluwer Academic Publishers, Dordrecht, The Nether- lands.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistical Machine Translation -Final Report", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "JHU Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Al-Onaizan et al.1999] Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah H. Smith, and David Yarowsky. 1999. Statistical Ma- chine Translation -Final Report, JHU Workshop 1999. Technical report, Johns Hopkins University.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Mathematics of Machine Translation: Parameter Estimation", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Brown et al.1993] Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mer- cer. 1993. The Mathematics of Machine Transla- tion: Parameter Estimation. Computational Linguis- tics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Example-Based Machine Translation in the Pangloss System", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ralf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING) 1996", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralf D. Brown. 1996. Example-Based Ma- chine Translation in the Pangloss System. In Proceed- ings of the International Conference on Computational Linguistics (COLING) 1996, pages 169-174, Copen- hagen, Denmark, August.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bricks and Skeletons: Some Ideas for the Near Future of MAHT", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Lang\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Machine Translation", |
|
"volume": "12", |
|
"issue": "1-2", |
|
"pages": "39--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Lang\u00e9 et al.1997] Jean-Marc Lang\u00e9,\u00c9ric Gaussier, and B\u00e9atrice Daille. 1997. Bricks and Skeletons: Some Ideas for the Near Future of MAHT. Machine Trans- lation, 12(1-2):39-51.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "TransSearch: A Free Translation Memory on the World Wide Web", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Macklovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Second International Conference on Language Resources & Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Macklovitch et al.2000] Elliott Macklovitch, Michel Simard, and Philippe Langlais. 2000. TransSearch: A Free Translation Memory on the World Wide Web. In Proceedings of the Second International Conference on Language Resources & Evaluation (LREC), Athens, Greece.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Towards a Unified Approach to Memory-and Statistical-Based Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu. 2001. Towards a Unified Approach to Memory-and Statistical-Based Machine Translation. In Proceedings of the 39th Annual Meet- ing of the Association for Computational Linguistics (ACL), Toulouse, France, July.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Improved Alignment Models for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Mctait", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 4th Conference on Empirical Methods in Natural Language Processing (EMNLP)and 7th ACL Workshop on Very Large Corpora (WVLC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[McTait et al.1999] Kevin McTait, Maeve Olohan, and Arturo Trujillo. 1999. A Building Blocks Approach to Translation Memory. In Proceedings of the 21st ASLIB International Conference on Translating and the Com- puter, London, UK. [Melamed1998] I. Dan Melamed. 1998. Word-to-Word Models of Translational Equivalence. Technical Re- port 98-08, Dept. of Computer and Information Sci- ence, University of Pennsylvania, Philadelphia, USA. [Och et al.1999] Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved Alignment Mod- els for Statistical Machine Translation. In Proceedings of the 4th Conference on Empirical Methods in Natu- ral Language Processing (EMNLP)and 7th ACL Work- shop on Very Large Corpora (WVLC), pages 20-28, College Park, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Shallow Parsing as Part-of-Speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Fourth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miles Osborne. 2000. Shallow Parsing as Part-of-Speech Tagging. In Claire Cardie, Wal- ter Daelemans, Claire N\u00e9dellec, and Erik Tjong Kim Sang, editors, Proceedings of the Fourth Conference on Computational Natural Language Learning, Lis- bon, Portugal, September.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Extending Translation Memories", |
|
"authors": [ |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Planas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "EAMT Machine Translation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmanuel Planas. 2000. Extending Trans- lation Memories. In EAMT Machine Translation Workshop, Ljubljana, Slovenia, May.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Using Cognates to Align Sentences in Bilingual Corpora", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 4th Conference on Theoretical and Methodological Issues in Machine Translation (TMI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Simard et al.1992] Michel Simard, George Foster, and Pierre Isabelle. 1992. Using Cognates to Align Sen- tences in Bilingual Corpora. In Proceedings of the 4th Conference on Theoretical and Methodological Issues in Machine Translation (TMI), pages 67-82, Montr\u00e9al, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "M\u00e9moires de traduction sous-phrastiques", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Simard. 2003. M\u00e9moires de tra- duction sous-phrastiques. Ph.D. thesis, Universit\u00e9 de Montr\u00e9al. to appear.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Evaluation of Parallel Text Alignment Systems -The ARCADE Project", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "V\u00e9ronis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Langlais", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Parallel Text Processing, Text, Speech and Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[V\u00e9ronis and Langlais2000] Jean V\u00e9ronis and Philippe Langlais. 2000. Evaluation of Parallel Text Alignment Systems -The ARCADE Project. In Jean V\u00e9ronis, ed- itor, Parallel Text Processing, Text, Speech and Lan- guage Technology. Kluwer Academic Publishers, Dor- drecht, The Netherlands.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Managing Gigabytes: Compressing and Indexing Documents and Images", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "V\u00e9ronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean V\u00e9ronis. 1998. Tagging guidelines for word alignment. http://www.up.univ-mrs.fr/ vero- nis/arcade/2nd/word/guide/index.html, April. [Witten et al.1999] Ian H. Witten, Alistair Moffat, and Timothy C. Bell. 1999. Managing Gigabytes: Com- pressing and Indexing Documents and Images. Mor- gan Kaufmann Publishing, San Francisco, USA, 2nd edition edition.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computational Linguistics", |
|
"volume": "23", |
|
"issue": "3", |
|
"pages": "377--404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekai Wu. 1997. Stochastic Inversion Trans- duction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-404, September.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "query : the government 's commitment couple: S = Let us see where the government's commitment is really at in terms of the farm community. T = Voyons quel est le v\u00e9ritable engagement du gouvernement envers la communaut\u00e9 agricole. Viterbi alignment on query tokens: the \u2192 le government \u2192 gouvernement 's \u2192 du commitment \u2192 engagement TL answer: T = Voyons quel est le v\u00e9ritable engagement du gouvernement envers la communaut\u00e9 agricole.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Viterbi TS example", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "illustrates how these three strategies affect the Viterbi TS of figure 2.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "TL answer: T = Voyons quel est le v\u00e9ritable engagement du gouvernement envers la communaut\u00e9 agricole.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "Contiguous TS Example", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"text": "Compositional TS Example", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"text": "Results of experiments", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>Metric</td></tr></table>", |
|
"text": "presents the performance of this method, taking into account only non-empty answers.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"text": "Performance of zero-tolerance filter on nonempty TL answers", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |