{
"paper_id": "W03-0312",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:11:30.650826Z"
},
"title": "Word Selection for EBMT based on Monolingual Similarity and Translation Confidence",
"authors": [
{
"first": "Eiji",
"middle": [],
"last": "Aramaki",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Kashioka",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Tanaka",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a method of constructing an example-based machine translation (EBMT) system that exploits a content-aligned bilingual corpus. First, the sentences and phrases in the corpus are aligned across the two languages, and the pairs with high translation confidence are selected and stored in the translation memory. Then, for a given input sentences, the system searches for fitting examples based on both the monolingual similarity and the translation confidence of the pair, and the obtained results are then combined to generate the translation. Our experiments on translation selection showed the accuracy of 85% demonstrating the basic feasibility of our approach.",
"pdf_parse": {
"paper_id": "W03-0312",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a method of constructing an example-based machine translation (EBMT) system that exploits a content-aligned bilingual corpus. First, the sentences and phrases in the corpus are aligned across the two languages, and the pairs with high translation confidence are selected and stored in the translation memory. Then, for a given input sentences, the system searches for fitting examples based on both the monolingual similarity and the translation confidence of the pair, and the obtained results are then combined to generate the translation. Our experiments on translation selection showed the accuracy of 85% demonstrating the basic feasibility of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The basic idea of example-based machine translation, or EBMT, is that translation examples similar to a part of an input sentence are retrieved and combined to produce a translation (Nagao, 1984) . In order to make a practical MT system based on this approach, a large number of translation examples with structural correspondences are required. This naturally presupposes high-accuracy parsers and well-aligned large bilingual corpora.",
"cite_spans": [
{
"start": 182,
"end": 195,
"text": "(Nagao, 1984)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the last decade, the accuracy of the parsers improved significantly. The availability of well-aligned bilingual corpora, however, has not increased despite our expectations. In reality, the number of bilingual corpora that share the same content, such as newspapers and broadcast news, has increased steadily. We call this type of corpus a content-aligned corpus. With these observations, we started a research project that covered all aspects of constructing EBMT systems starting from using First, the sentences and phrases in the corpus are aligned across the two languages, and the pairs with high translation confidence are selected and stored in the translation memory. Then, translation examples are retrieved based on both the monolingual similarity and the translation confidence of the pair. Finally, these examples are combined to generate the translation. This paper is organized as follows. The next section presents how to build the translation memory from a content-aligned corpus. Section 3 describes our EBMT system, paying special attention to the selection of translation examples. Section 4 reports experimental results of word selection, Section 5 describes related works, and Section 6 gives our conclusions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In EBMT, an input sentence can hardly be translated by a single translation example, except when an input is extremely short or is a typical domain-dependent sentence. Therefore, two or more translation examples are used to translate parts of the input and are then combined to generate a whole translation. Syntactic information is useful for composing example fragments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Translation Memory",
"sec_num": "2"
},
{
"text": "In this paper, we call a structurally aligned bilingual sentence pair a translation example or TE (Figure 1 ). This section presents our method for building TEs from a content-aligned corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 107,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Building Translation Memory",
"sec_num": "2"
},
{
"text": "Since the bilingual corpus used in our project does not contain literal translations, automatic parsing and alignment inevitably contain errors. Therefore, we selected highly likely TEs to make a translation memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Translation Memory",
"sec_num": "2"
},
{
"text": "We used a bilingual news corpus compiled by the NHK broadcasting service (NHK News Corpus), which consists of about 40,000 Japanese-English article pairs covering a five-year period. The average number of Japanese sentences in an article is 5.2, and that of English sentence is 7.4. Table 2 shows an example of an article pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "NHK News Corpus",
"sec_num": "2.1"
},
{
"text": "As shown in Table 2 , an English article is not a literal translation of a Japanese article, although their contents are almost parallel.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "NHK News Corpus",
"sec_num": "2.1"
},
{
"text": "We used a DP matching for bilingual sentence alignment, where we allow the matching of 1-to-1, 1-to-2, 1-to-3, 2to-1 and 2-to-2 Japanese and English sentence pairs. This matching covered 84% of the following evaluation set. We selected 96 article pairs for the evaluation of sentence and phrase alignment, and we call this the evaluation set. We use the following score for matching, which is based on a ratio of corresponding content words (WCR: content Word Corresponding Ratio).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment",
"sec_num": "2.2"
},
{
"text": "\u00cf \u00cf \u2022 \u00cf (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCR",
"sec_num": null
},
{
"text": "where \u00cf is the number of Japanese content words in a unit, \u00cf is the number of English content words, and \u00cf is the number of content words whose translation is also in the unit, which is found by translation dictionaries",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCR",
"sec_num": null
},
{
"text": "We used the EDR electronic dictionary, EDICT, ENAMDICT, the ANCHOR translation dictionary, and the EIJIRO translation dictionary. These dictionaries have about two million entries in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCR",
"sec_num": null
},
{
"text": "On the evaluation data, the precision of the sentence alignment (defined as follows) was 60.7%. precision # of correct system outputs # of system outputs 2Among types of a corresponding unit, the precision of 1-to-1 correspondence was the best, at 77.5%. Since a 1to-1 correspondence is suitable for the following phrase alignment, we decided to use only the 1-to-1 correspondence results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCR",
"sec_num": null
},
{
"text": "The 1-to-1 sentence pairs obtained in the previous section are then aligned at phrase level by the method based on (Aramaki et al., 2001 ). The method consists of the following pre-process and two aligning steps. Pre-process: Conversion to phrasal dependency structures. First, the phrasal dependency structures of the sentence pair are estimated. The English parser returns a word-based phrase structure, which is merged into a phrase sequence by the following rules and converted into a dependency structure by lifting up head phrases. The Japanese parser outputs the phrasal dependency structure of an input, and that is used as is. We used The Japanese parser KNP (Kurohashi and Nagao, 1994) and The English nl-parser (Charniak, 2000) .",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "(Aramaki et al., 2001",
"ref_id": "BIBREF0"
},
{
"start": 668,
"end": 695,
"text": "(Kurohashi and Nagao, 1994)",
"ref_id": "BIBREF6"
},
{
"start": 722,
"end": 738,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "Step 1: Estimation of basic phrasal correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "We started with the word-level alignment to get the basic phrasal alignment. We used translation dictionaries for this process. The word sense ambiguity in the dictionaries is resolved with a heuristics that the most plausible correspondence is near other correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "Step 2: Expansion of phrasal correspondences. Finally, the remaining phrases, which were not handled in the step 1, are merged into a neighboring phrase correspondence or are used to establish a new correspondence, depending on the surrounding existing correspondences. Figure 3 shows an example of a new correspondence established by a structural pattern.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "These procedures can detect the phrasal alignments in a pair of sentences as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "For phrase alignment evaluation, we selected all of the 145 sentence pairs that had 1-to-1 correspondences form the evaluation set and gave correct content word correspondences to these pairs. The phrase correspondences detected by the system were judged correct when the correspondences include the manually given content word correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "Based on this criterion, the precision of phrase alignment was 50%. Then, we found a correlation between the phrase alignment precision and WCR of parallel sentences as shown in Figure 4 . Furthermore, the precision of sentence alignment and WCR also have a correlation. Since their performances nearly reaches their limits when WCR is 0.3, we decided to use parallel sentences whose WCR is 0.3 or greater as TEs. ",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Phrase Alignment",
"sec_num": "2.3"
},
{
"text": "As explained in the preceding sections, among sentencealigned and phrase-aligned NHK News articles, TEs with a 1-to-1 sentence correspondence and whose WCR is 0.3 or greater are registered in the translation memory. Table 1 shows the number of TEs for each WCR range.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 224,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Building Translation Memory",
"sec_num": "2.4"
},
{
"text": "In addition, the Bilingual White Paper and Translation Memory of SENSEVAL2 (Kurohashi, 2001) were also phrase-aligned and registered in the translation memory. Sentence alignments are already given for these corpora. Since their parallelism are fairly high and the accuracies of their phrase alignments are more than 70%, we utilized all phrase-aligned sentence pairs as TEs (Table 1) .",
"cite_spans": [
{
"start": 75,
"end": 92,
"text": "(Kurohashi, 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 375,
"end": 384,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Building Translation Memory",
"sec_num": "2.4"
},
{
"text": "Our EBMT system translates a Japanese sentence into English. A Japanese input sentence is parsed and transformed into a phrase-based dependency structure. Then, for each phrase, an appropriate TE is retrieved from the translation memory that is most suitable for translating Figure 6 : Selection of a TE. the phrase (and its neighboring phrases). Finally, the English expressions of the TEs are combined to produce the final English translation ( Figure 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 6",
"ref_id": null
},
{
"start": 447,
"end": 455,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "EBMT System",
"sec_num": "3"
},
{
"text": "This section describes our EBMT system, mainly the TE selection part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT System",
"sec_num": "3"
},
{
"text": "The basic idea of TE selection is shown in Figure 6 . When a part of the input sentence and a part of the TE source language sentence have an equal expression, the part of the input sentence is called I and the part of the TE source language sentence is called S. A part of the TE target language corresponding to S is called T. The pair S and T is called fragment of TE (FTE).",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "I, S and T have to meet the following conditions, as a natural consequence of the fact that S-T is used for translating I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "1. I, S and T are each structurally connected phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "2. I is equal to S except for function words at the boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "3. S corresponds to T completely, that is, all phrases in S and T are aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "It might be the case that for an I, two or more FTEs that meet the above conditions exist in the translation memory. Our method takes into account the following relations among I-S-T to select the best FTE:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "1. The largest pair of I and S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "2. The similarity between the surroundings of I and these of S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of TE Selection",
"sec_num": "3.1"
},
{
"text": "The following sections concretely present how to calculate these criteria. For simplicity of explanation, we call a set of phrasal correspondences between S and T, EQ; that neighboring EQ, CONTEXT; that between S and T, ALIGN (Figure 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 235,
"text": "(Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "The confidence of alignment between S and T.",
"sec_num": "3."
},
{
"text": "The equality between I and S is a sum of the equality score of each phrase correspondence in EQ, which is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "EQUAL\u00b4 \u00b5 \u00c8 \u00cb \u00d3\u00d2\u00d8 \u00a2 \u00be \u00d3\u00d2\u00d8 \u2022 \u00bc \u00be \u00a2 \u00c8 \u00cb \u00d9\u00d2 \u00a2 \u00be \u00d9 \u00d2",
"eq_num": "(3)"
}
],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "where \u00d3\u00d2\u00d8 is the number of content words in the phrase correspondence, \u00d9\u00d2 is the number of function words, \u00cb \u00d3\u00d2\u00d8 is the equality between content words, and \u00cb \u00d9 \u00d2 is the equality between function words. \u00cb \u00d3\u00d2\u00d8 and \u00cb \u00d9 \u00d2 are given in Table 2 . 1 Usually, the equality score between I and S is equal to the number of phrases in I (the number of phrase correspondences in EQ), but sometimes these are slightly different, depending on the conjugation type and function words.",
"cite_spans": [
{
"start": 241,
"end": 242,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "On the other hand, the similarity between the surroundings of I and those of S is a sum of the similarity score of each phrase correspondence in CONTEXT, which is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "SIM\u00b4 \u00b5 \u00cb \u00d3\u00d2\u00d8 \u00a2 \u00be \u00d3\u00d2\u00d8 \u2022\u00bc \u00be\u00a2 \u00c8 \u00cb \u00d9\u00d2 \u00a2 \u00be \u00d9\u00d2 \u00a2\u00cb \u00d3\u00d2\u00d2 \u00d8",
"eq_num": "(4)"
}
],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "Basically the calculation of SIM and EQUAL is the same, except that SIM considers the relation type between the phrase in I and its outer phrase by \u00cb \u00d3\u00d2\u00d2 \u00d8 . When the relation is the same, the influence of the surrounding phrases must be large, so \u00cb \u00d3\u00d2\u00d2 \u00d8 is set to 1.0; when the relation is not the same, \u00cb \u00d3\u00d2\u00d2 \u00d8 is set to 0.5. The relations between phases are estimated by the function word or conjugation type of the dependent phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "The monolingual similarity between Japanese expressions I and S is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "\u00be \u00c9 EQUAL\u00b4 \u00b5 \u2022 \u00be \u00c7 AE \u00cc \u00cc SIM\u00b4 \u00b5 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity between Japanese Expressions",
"sec_num": "3.2"
},
{
"text": "The translation confidence of phrase alignment between S and T is the sum of the confidence score of each phrase correspondence in ALIGN, CONF( ) in Table 2 , and it is weighted by the WCR of the parallel sentences. As a final measure, the score of I-S-T is calculated as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Translation Confidence of Japanese-to-English Alignment",
"sec_num": "3.3"
},
{
"text": "\u00d2 \u00be \u00c9 EQUAL\u00b4 \u00b5 \u2022 \u00be \u00c7 AE \u00cc \u00cc SIM\u00b4 \u00b5 \u00d3 \u00a2 \u00d2 \u00be \u00c4\u00c1 AE CONF\u00b4 \u00b5 \u00d3 \u00a2 WCR (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Confidence of Japanese-to-English Alignment",
"sec_num": "3.3"
},
{
"text": "For each phrase (P) in an input sentence, the most plausible FTE is retrieved by the following algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Algorithm of FTE",
"sec_num": "3.4"
},
{
"text": "1. FTEs are retrieved from the translation memory, in which a Japanese phrase matches P, and it is aligned to an English phrase. (that is, these are FTEs that meet the basic conditions for translation in Section 3.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Algorithm of FTE",
"sec_num": "3.4"
},
{
"text": "2. For each FTE obtained in the previous step, it is checked whether the surrounding phrase of P and that of FTE are the same or similar, phrase by phrase, and the largest I-S-T that meets the basic conditions is detected. * \u00cb\u00d2\u00d8\u00d8 is a similarity calculated based on NTT thesaurus (Ikehara et al., 1997 ) (max = 1).",
"cite_spans": [
{
"start": 280,
"end": 301,
"text": "(Ikehara et al., 1997",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search Algorithm of FTE",
"sec_num": "3.4"
},
{
"text": "\u00cb \u00d9 \u00d2 1.0 stem match 0 otherwise 1.0 all content words in alignment correspond to each other in dic CONF( ) 0.8 some content words in alignment correspond to each other in dic 0.5 otherwise 3. The score of each I-S-T is calculated, and the best I-S-T (S-T is the FTE) is selected as the FTE for P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "exact match",
"sec_num": "1.1"
},
{
"text": "As a result of detecting FTEs for phrases in the input, two FTEs starting from the different phrase might overlap each other. In such a case, we employed a greedy search algorithm that adopts the higher score FTE one by one; therefore, each previously adopted FTE is only partly used for translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "exact match",
"sec_num": "1.1"
},
{
"text": "On the other hand, when no FTE is obtained for an input phrase, a translation dictionary is utilized (when the phrase contains two or more content words, the longest matching strategy is used for dictionary look-up). When two or more possible translations are given from the dictionary, the most frequent phrase/word in the NHK News Corpus is adopted. Figure 5 shows examples of FTEs detected by our method. 2",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "exact match",
"sec_num": "1.1"
},
{
"text": "The English expressions in the selected FTEs are combined, and the English dependency structure is constructed. The dependency relations in FTEs are preserved, and the relation between the two FTEs is estimated based on the relation of the input sentences. Figure 5 shows an example of a combined English dependency structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 266,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Generating a Target Sentence",
"sec_num": "3.5"
},
{
"text": "When a surface expression is generated from its dependency structure, its word order must be selected properly. This can be done by preserving the word order in FTEs and by ordering FTEs by a set of rules governing both the dependency relation and the word-order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a Target Sentence",
"sec_num": "3.5"
},
{
"text": "The module for controlling conjugation, determiner, and singular/plural is not yet implemented in our current MT system. 2 As the bottom example in Figure 5 shows, EBMT can easily handle head-switching translation by using an FTE that contains all of the head-switching phenomena in it.",
"cite_spans": [
{
"start": 121,
"end": 122,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Generating a Target Sentence",
"sec_num": "3.5"
},
{
"text": "For evaluation, we selected 50 sentence pairs from the NHK News Corpus that were not used for the translation memory. Their source (Japanese) sentences were translated by our EBMT system, and the selected FTEs were evaluated by hand, referring to the target (English) sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "A phrase by phrase evaluation was done to judge whether the English expression of the selected FTE was good or bad. The accuracy was 85.0%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In order to investigate the effectiveness of each component of FTE selection, we compared the following four methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "1. EQCONTEXTALIGN: The proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "2. EQALIGN: FTE score is calculated as follows, without the CONTEXT similarity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u00be \u00c9 EQUAL( ) \u00a2 \u00be \u00c4\u00c1 AE CONF( ) \u00a2 WCR (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "3. EQCONTEXT: FTE score is calculated as follows, without the ALIGN confidence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00be \u00c9 EQUAL( ) \u2022 \u00be \u00c7 AE \u00cc \u00cc SIM( )",
"eq_num": "(8)"
}
],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "4. DICONLY: Word selection is based only on dictionaries and frequency in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The accuracy of each method is shown in Table 3 , and the results indicate that the proposed method, EQ-CONTEXTALIGN, is the best, that is, using context similarity and align confidence works effectively. shows examples of EQCONTEXTALIGN and DICONLY. EQCONTEXTALIGN usually selects appropriate words, compared to DICONLY. When there are no plausible translation examples in the translation memory, the system selects a low-similarity or low-confidence FTE. However we believe this problem will be resolved as the number of translation examples increases, since the News Corpus is increasing day by day.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The idea of example based machine translation systems was first proposed by (Nagao, 1984) , and preliminary systems that appeared about ten years (Sato and Nagao, 1990; Sadler and Vendelmans, 1990; Maruyama and Watanabe, 1992; Furuse and Iida, 1994) showed the basic feasibility of the idea.",
"cite_spans": [
{
"start": 76,
"end": 89,
"text": "(Nagao, 1984)",
"ref_id": "BIBREF10"
},
{
"start": 146,
"end": 168,
"text": "(Sato and Nagao, 1990;",
"ref_id": "BIBREF13"
},
{
"start": 169,
"end": 197,
"text": "Sadler and Vendelmans, 1990;",
"ref_id": "BIBREF12"
},
{
"start": 198,
"end": 226,
"text": "Maruyama and Watanabe, 1992;",
"ref_id": "BIBREF8"
},
{
"start": 227,
"end": 249,
"text": "Furuse and Iida, 1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Recent studies have focused on the practical aspects of EBMT, and this technology has even been applied to some restricted domains. The work in ) addressed the problem of technical manual translation in several languages, and the work of (Imamura, 2002) dealt with dialogues translation in the travel arrangement domain. These works select the translation example pairs based solely on the source language similarity. We believe this is partly due to the high parallelism found in their corpora.",
"cite_spans": [
{
"start": 238,
"end": 253,
"text": "(Imamura, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our work targets a more general corpus of wider coverage, i.e., the broadcast news collection. Generally available corpora like the one we use tend to be more freely translated and suffer from lower parallelism. This compelled us to use the criterion of translation confidence, together with the criterion of monolingual similarity used in the previous works. As we showed in this paper, this metric succeeded in meeting our expectations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we described operations of the entire EBMT process while using a content-aligned corpus, i.e., the NHK Broadcast Corpus. In this process, one of the key problems is how to select plausible translation examples. We proposed a new method to select translation examples based on source language similarity and translation confidence. In the word selection task, the performance is highly accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "All constant values inTable 2and formulas were decided based on preliminary experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the 21st Century COE program \"Information Science and Technology Strategic Core\" at University of Tokyo and by a contract with the Telecommunications Advancement Organization of Japan, entitled \"A study of speech dialogue translation technology based on a large corpus\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Finding translation correspondences from parallel parsed corpus for example-based translation",
"authors": [
{
"first": "Eiji",
"middle": [],
"last": "Aramaki",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hideo",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of MT Summit VIII",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eiji Aramaki, Sadao Kurohashi, Satoshi Sato, and Hideo Watanabe. 2001. Finding translation correspondences from parallel parsed corpus for example-based transla- tion. In Proceedings of MT Summit VIII, pages 27-32.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL 2000",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In In Proceedings of NAACL 2000, pages 132- 139.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Constituent boundary parsing for example-based machine translation",
"authors": [
{
"first": "Osamu",
"middle": [],
"last": "Furuse",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Iida",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th COLING",
"volume": "",
"issue": "",
"pages": "105--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osamu Furuse and Hitoshi Iida. 1994. Constituent boundary parsing for example-based machine transla- tion. In Proceedings of the 15th COLING, pages 105- 111.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Application of translation knowledgeacquired by hierarchical phrase alignment for pattern-based mt",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of TMI-2002",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura. 2002. Application of translation knowl- edgeacquired by hierarchical phrase alignment for pattern-based mt. In Proceedings of TMI-2002, pages 74-84.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1994. A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20(4).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Senseval2 Japanese translation task",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SENSEVAL2",
"volume": "",
"issue": "",
"pages": "37--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi. 2001. Senseval2 Japanese translation task. In Proceedings of SENSEVAL2, pages 37-40.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The cover search algorithm for example-based translation",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Maruyama",
"suffix": ""
},
{
"first": "Hideo",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of TMI-1992",
"volume": "",
"issue": "",
"pages": "173--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroshi Maruyama and Hideo Watanabe. 1992. The cover search algorithm for example-based translation. In Proceedings of TMI-1992, pages 173-184.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A bestfirst alignment algorithm for automatic extraction of transfer mappings from bilingual corpora",
"authors": [
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"D."
],
"last": "Richardson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation",
"volume": "",
"issue": "",
"pages": "39--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arul Menezes and Stephen D. Richardson. 2001. A best- first alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. In Proceed- ings of the ACL 2001 Workshop on Data-Driven Meth- ods in Machine Translation, pages 39-46.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A framework of a mechanical translation between Japanese and english by analogy principle",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial and Human Intelligence",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Nagao. 1984. A framework of a mechanical translation between Japanese and english by analogy principle. In In Artificial and Human Intelligence, pages 173-180.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overcoming the customization bottleneck using examplebased mt",
"authors": [
{
"first": "Stephen",
"middle": [
"D."
],
"last": "Richardson",
"suffix": ""
},
{
"first": "William",
"middle": [
"B."
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Monica",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen D. Richardson, William B. Dolan, Arul Menezes, and Monica Corston-Oliver. 2001. Over- coming the customization bottleneck using example- based mt. In Proceedings of the ACL 2001 Work- shop on Data-Driven Methods in Machine Translation, pages 9-16.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pilot implementation of a bilingual knowledge bank",
"authors": [
{
"first": "V",
"middle": [],
"last": "Sadler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vendelmans",
"suffix": ""
}
],
"year": 1990,
"venue": "Proeedings of the 13th COLING",
"volume": "",
"issue": "",
"pages": "449--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Sadler and R. Vendelmans. 1990. Pilot implementa- tion of a bilingual knowledge bank. In Proeedings of the 13th COLING, pages 449-451.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Toward memorybased translation",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th COLING",
"volume": "",
"issue": "",
"pages": "247--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Sato and Makoto Nagao. 1990. Toward memory- based translation. In Proceedings of the 13th COLING, pages 247-252.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Translation Example (TE). a content-aligned corpus, i.e., a bilingual broadcast news corpus."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Underlined phrases and sentences have no parallel expressions in the other language."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "NHK News Corpus."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Handling of Remaining Phrases."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "WCR and Precision."
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Example of Translation."
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 7"
},
"FIGREF7": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Word Selection by EQCONTEXTALIGN and DICONLY."
},
"TABREF0": {
"text": "Number of TEs.",
"num": null,
"html": null,
"content": "<table><tr><td>Corpus</td><td>WCR</td><td># of TEs</td></tr><tr><td/><td>0.3-0.4</td><td>18290</td></tr><tr><td>NHK News</td><td>0.4-0.5</td><td>6975</td></tr><tr><td/><td>0.5-</td><td>2314</td></tr><tr><td>White Paper</td><td>-</td><td>2225</td></tr><tr><td>SENSEVAL</td><td>-</td><td>6920</td></tr><tr><td colspan=\"3\">1. Function words are grouped with the following</td></tr><tr><td>content word.</td><td/><td/></tr><tr><td colspan=\"3\">2. Adjoining nouns are grouped into one phrase.</td></tr><tr><td colspan=\"3\">3. Auxiliary verbs are grouped with the following</td></tr><tr><td>verb.</td><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF1": {
"text": "Parameters for Similarity and Confidence Calculation.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>1.1</td><td>exact match</td></tr><tr><td>\u00cb \u00d3\u00d2\u00d8</td><td colspan=\"2\">1.0 0.5 \u00a2 \u00cb\u00d2\u00d8\u00d8 + 0.3 thesaurus match stem match</td></tr><tr><td/><td>0.3</td><td>POS match</td></tr><tr><td/><td>0</td><td>otherwise</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "Experimental Results.",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Good Bad Accuracy</td></tr><tr><td>EQCONTEXTALIGN</td><td>268</td><td>47</td><td>85.0%</td></tr><tr><td/><td colspan=\"2\">(246) (35)</td><td>(87.5%)</td></tr><tr><td>EQALIGN</td><td>254</td><td>61</td><td>80.6 %</td></tr><tr><td/><td colspan=\"2\">(233) (48)</td><td>(82.9%)</td></tr><tr><td>EQCONTEXT</td><td>234</td><td>80</td><td>74.2%</td></tr><tr><td/><td colspan=\"2\">(213) (68)</td><td>(75.8%)</td></tr><tr><td>DICONLY</td><td>232</td><td>83</td><td>73.6%</td></tr><tr><td colspan=\"4\">* Values in brackets indicate the accuracy only for FTEs,</td></tr><tr><td colspan=\"4\">excluding cases in which the dictionary was used as a</td></tr><tr><td>backup.</td><td/><td/><td/></tr></table>",
"type_str": "table"
}
}
}
}