{
"paper_id": "E03-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:24:41.176171Z"
},
"title": "Applications of Automatic Evaluation Methods to Measuring a Capability of Speech Translation System",
"authors": [
{
"first": "Keiji",
"middle": [],
"last": "Yasuda",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sugaya",
"middle": [],
"last": "Fumiaki",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Toshiyuki",
"middle": [],
"last": "Takezawa",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Seiichi",
"middle": [],
"last": "Yamamoto",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Masuzo",
"middle": [],
"last": "Yanagida",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The main goal of this paper is to propose automatic schemes for the translation paired comparison method. This method was proposed to precisely evaluate a speech translation system's capability. Furthermore, the method gives an objective evaluation result, i.e., a score of the Test of English for International Communication (TOEIC). The TOEIC score is used as a measure of one's speech translation capability. However, this method requires tremendous evaluation costs. Accordingly, automatization of this method is an important subject for study. In the proposed method, currently available automatic evaluation methods are applied to automate the translation paired comparison method. In the experiments, several automatic evaluation methods (BLEU, NIST, DPbased method) are applied. The experimental results of these automatic measures show a good correlation with evaluation results of the translation paired comparison method.",
"pdf_parse": {
"paper_id": "E03-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "The main goal of this paper is to propose automatic schemes for the translation paired comparison method. This method was proposed to precisely evaluate a speech translation system's capability. Furthermore, the method gives an objective evaluation result, i.e., a score of the Test of English for International Communication (TOEIC). The TOEIC score is used as a measure of one's speech translation capability. However, this method requires tremendous evaluation costs. Accordingly, automatization of this method is an important subject for study. In the proposed method, currently available automatic evaluation methods are applied to automate the translation paired comparison method. In the experiments, several automatic evaluation methods (BLEU, NIST, DPbased method) are applied. The experimental results of these automatic measures show a good correlation with evaluation results of the translation paired comparison method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "ATR Interpreting Telecommunications Research Laboratories (ATR-ITL) developed the ATR-MATRIX (ATR's Multilingual Automatic Translation System for Information Exchange) speech translation system (Takezawa et al., 1998) , which translates both ways between English and Japanese. ATR-ITL has also been carrying out comprehensive evaluations of this system through dialog tests and analyses and has shown the effectiveness of the system for basic travel conversation .",
"cite_spans": [
{
"start": 194,
"end": 217,
"text": "(Takezawa et al., 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These experiences, however, indicated that it would be difficult to enlarge the evaluation target domain/task by simply adopting the dialog tests which is employed in the same way for ATR-MATRIX. Additional measures would be neces- Figure 1: Diagram of translation paired comparison method sary in the design of an expanded system in order to meet performance expectations. Sugaya et al. (2000) proposed the translation paired comparison method, which is applicable to precise evaluation of speech translation systems with a limited task/domain capability. A major disadvantage of the translation paired comparison method is its subjective approach to evaluation. Such an approach requires large costs and a long evaluation time. Therefore, automatization of this method remains an important issue to solve.",
"cite_spans": [
{
"start": 374,
"end": 394,
"text": "Sugaya et al. (2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several automatic evaluation methods have been proposed to achieve efficient development of MT technology, (Su et al., 1992; Papineni et al., 2002; NIST, 2002) . Both subjective and automatic evaluation methods are useful for making comparisons among different schemes or systems. However, these techniques are unable to objectively measure the performance of practical target application systems.",
"cite_spans": [
{
"start": 107,
"end": 124,
"text": "(Su et al., 1992;",
"ref_id": "BIBREF2"
},
{
"start": 125,
"end": 147,
"text": "Papineni et al., 2002;",
"ref_id": "BIBREF1"
},
{
"start": 148,
"end": 159,
"text": "NIST, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an automatization scheme for the translation paired comparison method that employs available automatic evaluation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 explains the translation paired comparison method, and Section 3 introduces the proposed evaluation scheme. Section 4 describes several automatic evaluation methods applied to the proposed method. Section 5 presents the evaluation results obtained by the proposed methods. Section 6 presents our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The translation paired comparison method can precisely measure the capability of a speech translation system. A brief description of the method is given in this section. Figure 1 shows a diagram of the translation paired comparison method in the case of Japanese to English translation. The Japanese nativespeaking examinees are asked to listen to spoken Japanese text and then write its English translation on paper. The Japanese text is presented twice within one minute, with a pause between the presentations. To measure the English capability of the Japanese native speakers, the TOEIC score (TOEIC, 2002) is used. The examinees are asked to present an official TOEIC score certificate confirming that they have officially taken the test within the past six months.",
"cite_spans": [
{
"start": 597,
"end": 610,
"text": "(TOEIC, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "In the translation paired comparison method, the translations by the examinees and the outputs of the system are printed in rows together with the original Japanese text to form evaluation sheets for comparison by an evaluator, who is a bilingual speaker of English and Japanese. Each transcribed utterance on the evaluation sheets is represented by the Japanese test text and the two translation results (i.e., translations by an examinee and by the system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "The evaluator is asked to follow the procedure depicted in Figure 2 . The meanings of ranks in the figure are as follows: (A) Perfect: no problem in both information and grammar; (B) Fair: easyto-understand with some unimportant information missing or flawed grammar; (C) Acceptable: broken but understandable with effort; (D) Nonsense: important information has been translated incorrectly.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "In the evaluation process, the human evaluator ignores misspellings because the capability to be measured is not English writing but speech translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "From the scores based on these rankings, either the examinee or the system is considered the \"winner\" for each utterance. If the ranking and the naturalness are the same for an utterance, the competition is considered \"even\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "To prepare the regression analysis, the number of \"even\" utterances are divided in half and equally assigned as system-won utterances and human-won utterances. Accordingly, we define the human winning rate (/17H) by the following equation: where Ntotai denotes the total number of utterances in the test set, Nh\" represents the number of human-won utterances, and N even indicates the number of even (non-winner) utterances, i.e., no quality difference between the results of the TDMT and humans. Details of the regression analysis are given in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
{
"text": "WH -(Nhuman -0.5 x Nevem) I Ntotal (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},
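{
"text": "As a rough illustration (ours, not part of the original paper), the human winning rate of Equation 1 can be computed directly from the comparison counts; the following Python sketch assumes the three counts are already available:\n\ndef human_winning_rate(n_human, n_even, n_total):\n    # Even (no-winner) utterances are split in half between the examinee and the system,\n    # so half of them is credited to the human side (Equation 1).\n    return (n_human + 0.5 * n_even) / n_total\n\nFor example, with 150 human-won, 60 even, and 330 total utterances (hypothetical counts), W_H = (150 + 30) / 330 = 0.545.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Paired Comparison Method",
"sec_num": "2"
},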
{
"text": "The first point to explain is how to automatize the translation paired comparison method. The basic idea of the proposed method is to substitute the human evaluation process of the translation paired comparison method with an automatic evaluation The unit of utterance corresponds to the unit of segment in BLEU and NIST. Similarly, the unit of the test set corresponds to the unit of document or system in BLEU and NIST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "The utterance unit evaluation takes roughly the same procedure as the translation paired comparison method. Figure 3 shows the points of difference between the translation paired comparison method and the utterance unit evaluation of the proposed method. The complete flow can be obtained by substituting Figure 3 for the broken line area of Figure 2 . In the regression analysis of the utterance unit evaluation, the same procedure as the original translation paired comparison method is carried out.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 305,
"end": 313,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 342,
"end": 350,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Utterance Unit Evaluation",
"sec_num": "3.1"
},
{
"text": "In a sense, the test set unit evaluation follows a different procedure from the translation paired comparison method and the utterance unit evaluation. The flow of the test set unit evaluation is shown in Figure 4 . In the regression analysis of the test set unit evaluation, the evaluation result by an automatic evaluation method is used instead of IVH. Papineni et al. (2002) proposed BLEU, which is an automatic method for evaluating MT quality using N-gram matching. The National Institute of Standards and Technology also proposed an automatic evaluation method called NIST 2002, which is a modified method of BLEU. Equation 3 is the BLEU score formulation, and Equation 4 is the NIST score formulation.",
"cite_spans": [
{
"start": 356,
"end": 378,
"text": "Papineni et al. (2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test Set Unit Evaluation",
"sec_num": "3.2"
},
{
"text": "SBLEU - exp E w\" log(p) -max ( re j 1, 0) } L* L N sys n=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Method",
"sec_num": "4.2"
},
{
"text": "Figure 4: Procedure of Test Set Unit Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Method",
"sec_num": "4.2"
},
{
"text": "In this section, we briefly describe the automatic evaluation methods that are applied to the proposed method. Basically, these methods are based on the same idea, that is, to compare the target translation for evaluation to high-quality human reference translations. These methods, then, require a corpus of high-quality human reference translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation Method",
"sec_num": "4"
},
{
"text": "The DP score between a translation output and references can be calculated by DP matching (Su et al., 1992; as follows:",
"cite_spans": [
{
"start": 90,
"end": 107,
"text": "(Su et al., 1992;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
{
"text": "=1 to all references 1. max f ---Di",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
{
"text": "(2) where SDP is the DP score, Ti is the total number of words in reference i, Si is the number of substitution words for comparing reference i to the translation output, /i is the number of inserted words for comparing reference i to the translation output, and Di is the number of deleted words for comparing reference i to the translation output. For the test set unit evaluation using the DP score, we employ the utterance-weighted average of utterance-level scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
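{
"text": "The following Python sketch (ours, not the authors' implementation) illustrates the DP score of Equation 2 under the assumption of word-level tokenization: the minimum word edit distance (substitutions + insertions + deletions) is computed against each reference by dynamic programming, and the best normalized score over all references is returned.\n\ndef dp_score(references, hypothesis):\n    # S_DP = max over references i of (T_i - S_i - I_i - D_i) / T_i, where S_i + I_i + D_i\n    # is the minimum edit distance between reference i and the hypothesis (both word lists).\n    def edit_distance(a, b):\n        # Classic dynamic-programming (Levenshtein) distance over word sequences.\n        d = list(range(len(b) + 1))\n        for i, wa in enumerate(a, 1):\n            prev, d[0] = d[0], i\n            for j, wb in enumerate(b, 1):\n                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))\n        return d[-1]\n    return max((len(r) - edit_distance(r, hypothesis)) / len(r) for r in references)\n\nFor instance, dp_score([['thank', 'you', 'very', 'much']], ['thank', 'you', 'so', 'much']) involves one substitution and gives (4 - 1) / 4 = 0.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},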
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
{
"text": "Pit = counteiip (n-grain) ECE { Candidates } C ounten -gram) EcE{Candidates} En-gramE{C} wn = N -1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
{
"text": "and L* f = the number of words in the reference re translation that is closest in length to the translation being scored L525 = the number of words in the translation being scored and 13 is chosen to make the brevity penalty fac-tor=0.5 when the number of words in the system translation is 2/3 of the average number of words in the reference translation. For Equations 3 and 4, N indicates the maximum n-gram length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
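{
"text": "To make Equation 3 concrete, here is a self-contained Python sketch of a BLEU-style score: modified n-gram precisions combined with a geometric mean and the brevity penalty written in the form of Equation 3. It is an illustrative reimplementation under our own variable names, not the official BLEU code.\n\nimport math\nfrom collections import Counter\n\ndef bleu_like(hypothesis, references, max_n=4):\n    # hypothesis: list of words; references: list of word lists.\n    log_p = 0.0\n    for n in range(1, max_n + 1):\n        hyp = Counter(tuple(hypothesis[i:i + n]) for i in range(len(hypothesis) - n + 1))\n        best = Counter()\n        for ref in references:\n            ref_counts = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))\n            for g in hyp:\n                best[g] = max(best[g], ref_counts[g])\n        clipped = sum(min(c, best[g]) for g, c in hyp.items())\n        if clipped == 0:\n            return 0.0  # geometric mean collapses to 0, as discussed in Section 5.3\n        log_p += (1.0 / max_n) * math.log(clipped / sum(hyp.values()))\n    closest_ref_len = min((len(r) for r in references), key=lambda length: (abs(length - len(hypothesis)), length))\n    brevity = max(closest_ref_len / len(hypothesis) - 1.0, 0.0)\n    return math.exp(log_p - brevity)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Method",
"sec_num": "4.2"
},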
{
"text": "SNI ST - x exp {,3 log2 [min L\" f ' (Ls -1)1} (4) where in/0(w' w\") =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DP-based Method",
"sec_num": "4.1"
},
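{
"text": "The information weight used by NIST in Equation 4 can be illustrated with a tiny helper (ours; the counts are assumed to come from the reference corpus, and for n = 1 the context count is the total number of reference words):\n\nimport math\n\ndef nist_info(ngram_count, context_count):\n    # info(w_1...w_n) = log2( count(w_1...w_{n-1}) / count(w_1...w_n) ):\n    # n-grams that are rare relative to their (n-1)-gram context carry more information.\n    return math.log2(context_count / ngram_count)\n\nFor instance, an n-gram observed twice whose (n-1)-gram prefix is observed eight times contributes log2(8 / 2) = 2 bits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Method",
"sec_num": "4.2"
},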
{
"text": "In this section, we show experimental results of the original translation paired comparison method and the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Experiments",
"sec_num": "5"
},
{
"text": "The target system to be evaluated is Transfer Driven Machine Translation (TDMT) (Takezawa et al., 1998) . TDMT is a language translation subsystem of the Japanese-to-English speech translation system ATR-MATRIX. For evaluation of TDMT, the input included accurate transcriptions.",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "(Takezawa et al., 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "5.1"
},
{
"text": "The total number of examinees is 29, and the range of their TOEIC score is between the 300s and 800s. Excepting the 600s, every hundredpoint range has 5 examinees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "5.1"
},
{
"text": "The test set consists of 330 utterances in 23 conversations from the ATR bilingual travel conversation database (Takezawa, 1999) . Consequently, this test set has different features from written language. Most of the utterances in our task contain fewer words than the unit of segment used so far in research with BLEU and NIST. One utterance contains 11.9 words on average. The standard deviation of the number of words is 6.5. The shortest utterance consists of 1 word, and the longest consists of 32 words. This test set was not used to train the TDMT system.",
"cite_spans": [
{
"start": 112,
"end": 128,
"text": "(Takezawa, 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "5.1"
},
{
"text": "For the translations of examinees, all misspellings were corrected by humans because, as mentioned in Section 2, the human evaluator ignores misspellings in the original translation paired comparison method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "5.1"
},
{
"text": "Comparison Method Figure 5 shows the results of a comparison between TDMT and the examinees. Here, the abscissa represents the TOEIC score, and the ordinate represents WH. In this figure, the straight line indicates the regression line. The capabilitybalanced point between the TDMT subsystem and . Table 1 : Detailed results of utterance unit evaluation the examinees was determined to be the point at which the regression line crossed half the total number of test utterances, i.e., WH of 0.5. In Figure 5, this point is 705. Consequently, the translation capability of the language translation system equals that of an examinee with a score of around 700 points on the TOEIC. We call this point the system's TOEIC score.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 5",
"ref_id": null
},
{
"start": 299,
"end": 306,
"text": "Table 1",
"ref_id": null
},
{
"start": 499,
"end": 505,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results by Translation Paired",
"sec_num": "5.2"
},
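{
"text": "A minimal sketch of the regression step (ours, not the authors' code): fit a least-squares line of W_H against the examinees' TOEIC scores and return the score at which the fitted line crosses W_H = 0.5, i.e., the system's TOEIC score.\n\ndef system_toeic_score(toeic_scores, w_h_values):\n    # Ordinary least-squares fit of W_H = slope * TOEIC + intercept, then solve for W_H = 0.5.\n    n = len(toeic_scores)\n    mean_x = sum(toeic_scores) / n\n    mean_y = sum(w_h_values) / n\n    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(toeic_scores, w_h_values)) / sum((x - mean_x) ** 2 for x in toeic_scores)\n    intercept = mean_y - slope * mean_x\n    return (0.5 - intercept) / slope",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results by Translation Paired Comparison Method",
"sec_num": "5.2"
},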
{
"text": "In their original forms, the maximum n-gram length for BLEU (N in Equation 3) is set at 4 and that for NIST (N in Equation 4) is set at 5. These settings were established for evaluation of written language. However, utterances in our test set contain fewer words than in typical written language. Consequently, for the utterance unit evaluation, we conducted several experiments while varying N from 1 to 4 for BLEU and from 1 to 5 for NIST. Table 1 shows the detailed results of the paired comparison using automatic evaluations. Figure 6 shows experimental results of the utterance unit -11 11\u2022IMMMMIMMI-1 II II II IMM 1 II II II MIMI II 1 I evaluation. In this figure, the abscissa represents the automatic evaluation method used and the ngram length, and the ordinate represents the correct ratio (Reo\",t) calculated by the following equation:",
"cite_spans": [],
"ref_spans": [
{
"start": 442,
"end": 449,
"text": "Table 1",
"ref_id": null
},
{
"start": 531,
"end": 539,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results of Utterance Unit Evaluation",
"sec_num": "5.3"
},
{
"text": "Reorrect -Ucorrect I Utotul (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results of Utterance Unit Evaluation",
"sec_num": "5.3"
},
{
"text": "where Utota i is the total number of translation pairs consisting of the examinees' translation and the system's translation (330 utterances x 29 examinees = 9570 pairs) and Ue0\"ect is the number of pairs where the automatic evaluation gives the same evaluation result as that of the human evaluator. The difference between Figures 6 and 7 is the number of references to be used for automatic evaluation. In Figure 6 , there is 1 reference per utterance, while in Figure 7 there are 16 references per utterance. In these figures, values in parentheses under the abscissa indicate the maximum n-gram length. Looking at these figures, the correct ratio of BLEU changes value depending on the maximum n-gram length. The maximum n-gram length of 1 or 2 yields a high correct ratio, and that of 3 or 4 yields a low correct ratio. On the other hand, the correct ratio of NIST is not influenced by the maximum n-gram length. It seems reasonable to suppose that these phenomena are due to computation of the mean of n-gram matching. As shown in Equations 3 and 4, BLEU applies a geometric mean and NIST applies an information-weighted arithmetic mean. Computation of the geometric mean yields 0 when one of the factors is 0, i.e., the BLEU score takes 0 for all of the utterances whose word count is less than the maximum ngram length.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 339,
"text": "Figures 6 and 7",
"ref_id": null
},
{
"start": 408,
"end": 416,
"text": "Figure 6",
"ref_id": null
},
{
"start": 464,
"end": 472,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results of Utterance Unit Evaluation",
"sec_num": "5.3"
},
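{
"text": "The correct ratio of Equation 5 can be sketched as follows (our illustration; the judgement labels are hypothetical names): for each of the U_total pairs, the automatic method prefers whichever translation it scores higher, and R_correct is the fraction of pairs on which this preference matches the bilingual evaluator's judgement.\n\ndef correct_ratio(auto_scores_human, auto_scores_system, human_judgements):\n    # R_correct = U_correct / U_total over all (examinee, utterance) pairs;\n    # human_judgements holds 'human', 'system' or 'even' for each pair.\n    agree = 0\n    for h, s, judged in zip(auto_scores_human, auto_scores_system, human_judgements):\n        automatic = 'human' if h > s else 'system' if s > h else 'even'\n        agree += (automatic == judged)\n    return agree / len(human_judgements)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results of Utterance Unit Evaluation",
"sec_num": "5.3"
},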
{
"text": "The correct ratio shown in Figures 6 and 7 is low, i.e., around 0.5. Thus, even state-of-theart technology is insufficient to determine better translation in the utterance unit evaluation. For a sufficient result of the utterance unit evaluation, we need a more precise automatic evaluation method or another scheme, for example, majority decision using multiple automatic evaluation methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 42,
"text": "Figures 6 and 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results of Utterance Unit Evaluation",
"sec_num": "5.3"
},
{
"text": "In the original BLEU or NIST formulation of the test set unit (or document or system level) evaluation, n-gram matches are computed at the utterance level, but the mean of n-gram matches is computed at the test-set level. However, considering the characteristics of the translation paired comparison method, the average of the utterancelevel scores might be more suitable. Therefore, we carried out experiments using both the original formulation and the average of utterance-level scores. For the average of utterance-level scores, considering the experimental results shown in Figure 7 , we used the maximum n-gram length of 2 for BLEU and 5 for NIST. Figure 8 shows the correlation between automatic measures and WH. In this figure, the abscissa represents the number of references used for automatic evaluation, and the ordinate represents Figure 9 shows the correlation between automatic measures and TOEIC score. In this figure, the abscissa and the ordinate represent the variable as Figure 8 . Figure 10 shows the system's TOEIC score using the proposed method. Here, the number of references is 16. In this figure, the ordinate represents the system's TOEIC score, and the broken line represents the system's TOEIC score using the original translation paired comparison method.",
"cite_spans": [],
"ref_spans": [
{
"start": 579,
"end": 587,
"text": "Figure 7",
"ref_id": null
},
{
"start": 654,
"end": 662,
"text": "Figure 8",
"ref_id": null
},
{
"start": 844,
"end": 852,
"text": "Figure 9",
"ref_id": "FIGREF8"
},
{
"start": 991,
"end": 999,
"text": "Figure 8",
"ref_id": null
},
{
"start": 1002,
"end": 1011,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
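{
"text": "The correlations reported in Figures 8 and 9 can be computed as plain Pearson correlations between per-examinee values (our assumption; the paper does not state which correlation coefficient was used):\n\nimport math\n\ndef pearson(xs, ys):\n    # e.g., xs = automatic test-set-level score obtained for each examinee's translations,\n    # ys = that examinee's W_H (Figure 8) or TOEIC score (Figure 9).\n    n = len(xs)\n    mean_x, mean_y = sum(xs) / n, sum(ys) / n\n    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))\n    var_x = sum((x - mean_x) ** 2 for x in xs)\n    var_y = sum((y - mean_y) ** 2 for y in ys)\n    return cov / math.sqrt(var_x * var_y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results of Test Set Unit Evaluation",
"sec_num": null
},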
{
"text": "In Figures 8, 9 and 10, white bars indicate the results using the original BLEU score, black bars indicate the results using the original NIST score, and gray bars indicate the results using the DPbased method. The bars with lines indicate the results using the original BLEU or NIST score, and those without lines indicate the results using the average of utterance-level scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 15,
"text": "Figures 8, 9",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "When we choose an automatic evaluation =BLEU (Original) IIBLEU(2 -grarn. utterance mean) IMNIST (Original) IMINIST (5-gram, utterance mean) pIDP Figure 10 : System's TOEIC score by proposed method method to apply to the proposed method, there are two points that needs to be considered. One is the ability to precisely evaluate human translations. This ability can be evaluated by the results in Figures 8 and 9 , and it affects confidence inter-val2 of the system 's TOEIC score. The other point to consider is the evaluation bias from the human's translation to the system's translation. This affects system's actual TOEIC score, which is shown in Figure 10 . Looking at Figures 8 and 9 , all of the automatic measures correlate highly with both WH and TOEIC score. In particular, the averaged utterance-level BLEU score shows the highest correlation. However, looking at Figure 10 , the system's TOEIC score using this measure deviates from that of the original translation paired comparison method.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 154,
"text": "Figure 10",
"ref_id": null
},
{
"start": 396,
"end": 411,
"text": "Figures 8 and 9",
"ref_id": "FIGREF8"
},
{
"start": 650,
"end": 659,
"text": "Figure 10",
"ref_id": null
},
{
"start": 673,
"end": 688,
"text": "Figures 8 and 9",
"ref_id": "FIGREF8"
},
{
"start": 874,
"end": 883,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "From the viewpoint of the system's TOEIC score, the DP-based method gives the best result at 708 points, while the original translation paired comparison method yielded a score of 705. The original BLEU also gives a good result at a system TOEIC score of 712.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "Considering the reductions in the evaluation costs and time, this automatic scheme shows a good performance and thus is very promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "We proposed automatic schemes for the translation paired comparison method. In the experi- 2 The formula of the confidence interval is mentioned in the original paper of the translation paired comparison method (Sugaya et al., 2000) . ments, we applied currently available automatic evaluation methods: BLEU, NIST and a DP-based method. The target system evaluated was TDMT. We carried out two experiments: an utterance unit evaluation and a test set unit evaluation. According to the evaluation results, the utterance unit evaluation was insufficient to automatize the translation paired comparison method.",
"cite_spans": [
{
"start": 91,
"end": 92,
"text": "2",
"ref_id": null
},
{
"start": 211,
"end": 232,
"text": "(Sugaya et al., 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "However, the test set unit evaluation using the DP-based method and the original BLEU gave good evaluation results. The system's TOEIC score using the DP-based method was 708 and that using BLEU was 712, while the original translation paired comparison method gave a score around of 705.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "To confirm the general effectiveness of the proposed method, we are conducting experiments on another system as well as the opposite translation direction, i.e., English to Japanese translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "' An automatic evaluation method for the proposed method does not have to be a certain kind. However, needless to add, a precise automatic evaluation method is ideal. The automatic evaluation methods that we applied to the proposed method are explained in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\". It was also supported in part by the Academic Frontier Project promoted by Doshisha University.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurence Statistics",
"authors": [
{
"first": "",
"middle": [],
"last": "Nist",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2002. Automatic Evaluation of Machine Translation Quality Us- ing N-gram Co-Occurence Statistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Pro- ceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A new quantitative quality measure for machine translation systems",
"authors": [
{
"first": "K.-Y",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J.-S",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics(COLING)",
"volume": "",
"issue": "",
"pages": "433--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.-Y. Su, M.-W. Wu, and J.-S. Chang. 1992. A new quantitative quality measure for ma- chine translation systems. In Proceed- ings of the 14th International Conference on Computational Linguistics(COLING), pages 433-439.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "End-to-end evaluation in ATR-MATRIX: speech translation system between English and Japanese",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "2431--2434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Sugaya, T. Takezawa, A. Yokoo, and S. Ya- mamoto. 1999. End-to-end evaluation in ATR-MATRIX: speech translation system between English and Japanese. In Proceed- ings of Eurospeech, pages 2431-2434.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Evaluation of the atr-matrix speech translation system with a paired comparison method between the system and humans",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of International Conference on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "1105--1108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Sugaya, T. Takezawa, A. Yokoo, Y. Sag- isaka, and S. Yamamoto. 2000. Evalua- tion of the atr-matrix speech translation sys- tem with a paired comparison method be- tween the system and humans. In Proceed- ings of International Conference on Spo- ken Language Processing (ICSLP), pages 1105-1108.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Japanese-to-English speech translation system: ATR-MATRIX",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Campbell",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of International Conference on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "2779--2782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, T. Morimoto, Y. Sagisaka, N. Campbell, H. Iida, F. Sugaya, A. Yokoo, and S. Yamamoto. 1998. A Japanese-to- English speech translation system: ATR- MATRIX. In Proceedings of International Conference on Spoken Language Process- ing (ICSLP), pages 2779-2782.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new evaluation method for speech translation systems and a case study on ATR-MATRIX from Japanese to English",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceeding of Machine Translation Summit (MT Summit)",
"volume": "",
"issue": "",
"pages": "299--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, F. Sugaya, A. Yokoo, and S. Ya- mamoto. 1999. A new evaluation method for speech translation systems and a case study on ATR-MATRIX from Japanese to English. In Proceeding of Machine Trans- lation Summit (MT Summit), pages 299- 307.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building a bilingual travel conversation database for speech translation research",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 2nd International Workshop on East-Asian Language Resources and Evaluation -Oriental COCOSDA Workshop '99",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa. 1999. Building a bilin- gual travel conversation database for speech translation research. In Proceedings of the 2nd International Workshop on East-Asian Language Resources and Evaluation -Ori- ental COCOSDA Workshop '99 -, pages 17-20.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Test of English for International Communication",
"authors": [
{
"first": "",
"middle": [],
"last": "Toeic",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TOEIC. 2002. Test of English for International Communication.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Procedure of comparison by a bilingual speaker",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Procedure of Utterance Unit Evaluation method'. There are two kinds of units to apply an automatic evaluation method to the automatization of the translation paired comparison method. One is an utterance unit, and the other is a test set unit.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "log2 the number of occurence of wi...wn-i) the number of occurence of wi...w n -\"ref = the average number of words in a reference translation, averaged over all reference translationsL8y8 = the number of words in the translation being scored",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Figure 5: Evaluation results using translation paired comparison method",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Correct ratio of utterance unit evaluation (Number of references = 1) Correct ratio of utterance unit evaluation (Number of references = 16)",
"num": null
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"text": "Figure 8: Correlation between automatic measures and WH-",
"num": null
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"text": "Correlation between automatic measures and TOEIC score correlation. On the other hand,",
"num": null
}
}
}
}