|
{ |
|
"paper_id": "E03-1050", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:25:18.187948Z" |
|
}, |
|
"title": "Using Noisy Bilingual Data for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "SMT systems rely on sufficient amount of parallel corpora to train the translation model. This paper investigates possibilities to use word-to-word and phrase-to-phrase translations extracted not only from clean parallel corpora but also from noisy comparable corpora. Translation results for a Chinese to English translation task are given.", |
|
"pdf_parse": { |
|
"paper_id": "E03-1050", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "SMT systems rely on sufficient amount of parallel corpora to train the translation model. This paper investigates possibilities to use word-to-word and phrase-to-phrase translations extracted not only from clean parallel corpora but also from noisy comparable corpora. Translation results for a Chinese to English translation task are given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Statistical machine translation systems typically use a translation model trained on bilingual data and a language model for the target language, trained on perhaps some larger monolingual data. Often the amount of clean parallel data is limited. This leads to the question of whether translation quality can be improved by using additional noisier bilingual data. Some approaches, like (Fung and MxKeown, 1997) , have been developed to extract word translations from non-parallel corpora. In (Munteanu and Marcu, 2002) bilingual suffix trees are used to extract parallel sequences of words from a comparable corpus. 95% of those phrase translation pairs were judged to be correct. However, no results where reported if these additional translation correspondences resulted in improved translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 387, |
|
"end": 411, |
|
"text": "(Fung and MxKeown, 1997)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 519, |
|
"text": "(Munteanu and Marcu, 2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistical translation as introduced in (Brown et al., 1993) is based on word-to-word translations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 61, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The SMT System", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The SMT system used in this study relies on multiword to multi-word translations. The term phrase translations will be used throughout this paper without implying that these multi-word translation pairs are phrases in some linguistic sense. Phrase translations can be extracted from the Viterbi alignment of the alignment model. Phrase translation pairs are seen only a few times. Actually, most of the longer phrases are seen only once in even the larger corpora. Using relative frequency to estimate the translation probability would make most of the phrase translation probabilities 1.0. This would lead to two consequences: First, phrase translation would always be preferred over a translation generated using word translations from the statistical and manual lexicons, even if the phrase translation is wrong, due to misalignment. Secondly, two translations would often have the same probability. As the language model probability is larger for shorter phrases this will usually result in overall shorter sentences, which sometimes are too short.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The SMT System", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To make phrase translations comparable to the word translations the translation probability is calculated on the basis of the word translation probabilities resulting from IBM 1-type alignment. This now gives the desired property that longer (1) translations get higher probabilities. If the additional word should not be part of the phrase translation then these additional probabilities kb ei) which go into the sum will be small, i.e. the phrase translation probabilities will be very similar and the language model gives a bias toward the shorter translation. If, however, this additional word is actually the translation of one of the words in the source phrase then the additional probabilities going into the summation are large, resulting in an overall larger phrase translation probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The SMT System", |
|
"sec_num": "2" |
|
}, |
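
{

"text": "The following minimal sketch is added here for illustration only and is not part of the original paper; the function and variable names are hypothetical. It computes the phrase score described above from an IBM1-style word lexicon, as the product over source words of the sum of word translation probabilities over the target words.\n\ndef phrase_score(src_phrase, tgt_phrase, lex, floor=1e-7):\n    # lex maps (src_word, tgt_word) to the IBM1 word translation probability;\n    # unseen pairs fall back to a small floor value\n    score = 1.0\n    for f in src_phrase:\n        score *= sum(lex.get((f, e), floor) for e in tgt_phrase)\n    return score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The SMT System",

"sec_num": "2"

},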
|
{ |
|
"text": "More importantly, calculating the phrase translation probability on the basis of word translation probabilities increases the robustness. Wrong phrase pairs resulting from errors in the Viterbi alignment will have a low probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The SMT System", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To train the Chinese-to-English translation system 4 different corpora were used: 1) Chinese tree-bank data (LDC2002E17): this is a small corpus (90K words) for which a tree-bank has been built. 2) Chinese news stories, collected and translated by the Foreign Broadcast Information Service (FBIS). 3) Hong Kong news corpus distributed through LDC (LDC2000T46). 4) Xinhua news: Chinese and English news stories publish by the Xinhua news agency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The first three corpora are truly bilingual corpora in that the English part is actually a translation of the Chinese. Together, the form the clean corpus which has 9.7 million words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The Xinhua news corpus (XN) is not a parallel corpus. The Chinese and English news stories are typically not translations of each other. The Chinese news contains more national news whereas the English news is more about international events. Only a small percentage of all stories is close enough to be considered as comparable. Identification of these story pairs was done automatically at LDC using lexical information as described in (Xiaoyi Ma, 1999) . In this approach a document B is considered an approximate translation of document A if the similarity between A and B is above some threshold, where similarity is defined as the ratio of tokens from A for which a translation appears in document B in a nearby position. The document with the highest similarity is selected. For the Xinhua News corpus less then 2% of the entire news stories could be aligned. Inspection showed that even these pairs can not be considered to be true translations of each other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 455, |
|
"text": "(Xiaoyi Ma, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
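
{

"text": "For illustration only (this code is not from the paper), a rough sketch of the document similarity described above: the fraction of tokens in document A for which a dictionary translation appears in document B near the corresponding position. The names and the window size are assumptions.\n\ndef doc_similarity(doc_a, doc_b, translations, window=10):\n    # doc_a, doc_b: token lists; translations maps a token of A to a set of possible translations\n    if not doc_a or not doc_b:\n        return 0.0\n    matched = 0\n    for pos, tok in enumerate(doc_a):\n        # map the position in A to a rough corresponding position in B\n        centre = int(pos * len(doc_b) / len(doc_a))\n        nearby = set(doc_b[max(0, centre - window):centre + window + 1])\n        if translations.get(tok, set()) & nearby:\n            matched += 1\n    return matched / len(doc_a)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Corpora",

"sec_num": "3.1"

},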
|
{ |
|
"text": "In our translation experiments we also used the LDC Chinese English dictionary (LDC2002E27). This dictionary has about 53,000 Chinese entries with on average 3 translations each.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The FBIS, Hong Kong news and Xinhua news corpora all required sentence alignment. Different sentence alignment methods have been proposed and shown to give reliable results for parallel corpora. For non-parallel but comparable corpora sentence alignment is more challenging as it requires -in addition to finding a good alignmentalso a means to distinguish between sentence pairs which are likely to be translations of each other and those which are aligned to each other but can not be considered translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "An iterative approach to sentence alignment for this kind of noisy data has been described in (Bing Zhao, 2002) . This approached was used to sentence align the Xinhua News stories. Sentence length and lexical information is used to calculate sentence alignment scores. The alignment algorithm allows for insertions and deletions. These sentences are removed as are sentence pairs which have a low overall sentence alignment score. About 30% of the sentence pairs were deleted to result in the final corpus of 2.7 million words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 111, |
|
"text": "(Bing Zhao, 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
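
{

"text": "As a purely illustrative sketch (this is not the algorithm of (Bing Zhao, 2002); the score shape, thresholds, and names are assumptions), a sentence-pair score can combine a length model with IBM1-style lexical evidence, and pairs scoring below an empirically chosen threshold would be removed.\n\nimport math\n\ndef sentence_pair_score(src, tgt, lex, floor=1e-7, expected_ratio=1.0):\n    # lexical part: average log of the summed word translation probabilities\n    lex_score = sum(math.log(max(sum(lex.get((f, e), floor) for e in tgt), floor)) for f in src) / max(len(src), 1)\n    # length part: penalise pairs whose length ratio deviates from the expected ratio\n    len_score = -abs(math.log((len(tgt) + 1.0) / ((len(src) + 1.0) * expected_ratio)))\n    return lex_score + len_score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Corpora",

"sec_num": "3.1"

},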
|
{ |
|
"text": "The test data used in the following analysis and also in the translation experiments is a set of 993 sentences from different Chinese news wires, which has been used in the TIDES MT evaluation in December 2001.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Corpora", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To get good translations requires first of all that the vocabulary of the test sentences is well covered by the training data. Coverage can be expressed in terms of tokens, i.e. how many of the tokens in the test sentences are covered by the vocabulary of the training corpus, and in terms of types, i.e. how many of the word types in the test sentences have been seen in the training data. A problem with Chinese is of course that the vocabulary depends heavily on the word segmentation. In a way the vocabulary has to be determined first, as a word list is typically used to do the segmentation. There is a certain trade-off: a large word list for segmentation will result in more unseen words in the test sentences with respect to a training corpus. A small word list will lead to more errors in segmentation. For the experiments reported in this paper a word list with 43, 959 entries was used for word segmentation. Table 1 gives corpus and vocabulary coverage for each of the Chinese corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 921, |
|
"end": 928, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis: Vocabulary Coverage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our statistical translation system uses not only word-to-word translations but also phrase translations. The more phrases in the test sentences are found in the training data, the better. And longer phrases will generally result in better translations, as they show larger cohesiveness and better word order in the target language. The n-gram coverage analysis takes all n-grams from the test sentences for n=2, n=3, ... and finds all occurrences of these n-grams in the different training corpora. From Table 2 we see that the Xinhua news corpus, which is only about a quarter of the size of the clean data, contains a much larger number of long word sequences occurring also in the test data. This is no surprise, as part of the test sentences come from Xinhua news, even though they date from a year not included in the training data. Adding this corpus to the other training data therefore gives the potential to extract more and longer phrase to phrase translations which could result in better translations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 504, |
|
"end": 511, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis: N-gram coverage", |
|
"sec_num": "3.3" |
|
}, |
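
{

"text": "As an illustrative sketch only (not from the paper; names are hypothetical), the n-gram coverage analysis can be implemented by collecting all n-grams of a training corpus and counting how many n-gram occurrences in the test sentences are found in that set.\n\ndef ngram_coverage(test_sents, train_sents, n):\n    # collect all n-grams of the training corpus\n    train_ngrams = set()\n    for sent in train_sents:\n        for i in range(len(sent) - n + 1):\n            train_ngrams.add(tuple(sent[i:i + n]))\n    # count n-gram occurrences in the test sentences that also appear in the training data\n    found = 0\n    for sent in test_sents:\n        for i in range(len(sent) - n + 1):\n            if tuple(sent[i:i + n]) in train_ngrams:\n                found += 1\n    return found",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Analysis: N-gram coverage",

"sec_num": "3.3"

},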
|
{ |
|
"text": "Many of the detected n-grams are actually overlapping, resulting from a very small number of very long matches was detected. And each n-gram contains m (n-m+1)-grams. The longest matching n-grams in the Xinhua news corpus were 56, 53, 43, 34, 31, 28, 24, 21 words long, each occurring once. Brown et al., 1993) and HMM alignments (Vogel et al., 1996) were trained for both the clean parallel corpus and for the extended corpus with the noisy Xinhua News data. The alignment models were trained for Chinese to English as well as English to Chinese. Phrase-tophrase translations were extracted from the Viterbi path of the HMM alignment. The reverse alignment, i.e. English to Chinese, was used for phrase pair extraction as this resulted in higher translation quality in our experiments. The translation probabilities, however, where calculated using the lexicon trained with the IBM1 Chinese to English alignment. Table 3 gives the alignment perplexities for the different runs. English to Chinese alignment gives lower perplexity than Chinese to English. Adding the noisy Xinhua news data leads to significantly higher alignment perplexities. In this situation, the additional data gives us more and longer phrase translations, but the translations are less reliable. And the question is, what is the overall effect on translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 230, |
|
"text": "56,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 234, |
|
"text": "53,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 238, |
|
"text": "43,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 242, |
|
"text": "34,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 246, |
|
"text": "31,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 250, |
|
"text": "28,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 254, |
|
"text": "24,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 257, |
|
"text": "21", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 310, |
|
"text": "Brown et al., 1993)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 350, |
|
"text": "(Vogel et al., 1996)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 914, |
|
"end": 921, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis: N-gram coverage", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The decoder uses a translation model (the LDC glossary, the IBM1 lexicon, and the phrase translation) and a language model to find the best translation. The first experiment was designed to amplify the effect the noisy data has on the translation model by using an oracle language model built from the reference translations. This language model will pick optimal or nearly optimal translations, given a translation model. To evaluate translation quality the NIST MTeval scoring script was used (MTeval, 2002) . Using word and phrase translations extracted form the clean parallel data resulted in an MTeval score of 8.12. Adding the Xinhua News corpus improved the translation significantly to 8.75. This shows that useful translations have been extracted from the additional noisy data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 509, |
|
"text": "(MTeval, 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The next step was to test if this improvement is also possible when using a proper language model. The language model used was trained on a corpus of 100 million words from the English news stories published by the Xinhua News Agency between 1992 and 2001. Unfortunately, the MTeval score dropped from 7.59 to 7.31 when adding the noisy data. Restricting the lexicon, however, to a small number of high probabilty translations, thereby reducing the noise in the lexiocn, the score improved only marginally for the clean data system, but considerably for the noisy data system. The noisy data system then outperformed the clean data system. These results are summarized in Table 4. A t-test run on the sentence level scores showed that the difference between 7.62 and 7.69 is statistically significant at the 99% level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Initial translation experiments have shown that using word and phrase translations extracted from A detailed analysis will be carried out to see how the different training corpora contributed to the translations. This will include a human evaluation of the quality of phrase translations extracted from the noisier data. Next steps will include training the statistical lexicon on clean data only and using this to filter the phrase-to-phrase translations extracted from comparable corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Bing Zhao and Stephan Vogel, 2002. Adaptive Parallel Sentence Mining from Web Bilingual News Collection. ICDM '02: The 2002 IEEE International Conference on Data Mining , Maebashi City, Japan, December 2002.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", |
|
"authors": [ |
|
{

"first": "Peter",

"middle": [

"F"

],

"last": "Brown",

"suffix": ""

},

{

"first": "Stephen",

"middle": [

"A"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Vincent",

"middle": [

"J"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Robert",

"middle": [

"L"

],

"last": "Mercer",

"suffix": ""

}
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer, \"The Mathe- matics of Statistical Machine Translation: Parame- ter Estimation,\" Computational Linguistics, vol. 19, no. 2, pp. 263-311,1993.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A Technical Word-and Term-Translation Aid Using Noisy Parallel Corpora across Language Groups", |
|
"authors": [ |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Machine Translation", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "53--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascale Fung and Kathleen McKeown. 1997. A Tech- nical Word-and Term-Translation Aid Using Noisy Parallel Corpora across Language Groups. In Ma- chine Translation, volume 12, numbers 1-2 (Special issue), Kluwer Academic Publisher, Dordrecht, The Netherlands, pp. 53-87.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BITS: A Method for Bilingual Text Search over the Web", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoyi", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lieberman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoyi Ma and Mark Y. Lieberman. 1999. BITS: A Method for Bilingual Text Search over the Web..", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Machine Translation Summit VII", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Machine Translation Summit VII.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Processing Comparable Corpora With Bilingual Suffi x Trees", |
|
"authors": [ |
|
{

"first": "Dragos",

"middle": [

"Stefan"

],

"last": "Munteanu",

"suffix": ""

},

{

"first": "Daniel",

"middle": [],

"last": "Marcu",

"suffix": ""

}
|
], |
|
"year": 2002, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2002. Processing Comparable Corpora With Bilingual Suffi x Trees. Empirical Methods in Natural Lan- guage Processing , Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "NIST MT evaluation kit version 9", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "NIST MT evaluation kit version 9. Available at: http://www.nist.gov/speechltests/mtl.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "HMM-based Word Alignment in Statistical Translation", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "COLING '96: The 16th Int. Conf. on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "836--841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann, HMM-based Word Alignment in Statistical Transla- tion, in COLING '96: The 16th Int. Conf. on Com- putational Linguistics, Copenhagen, August 1996, pp. 836-841.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"text": "Corpus coverage (C-Voc) and vocabulary coverage of the test data given different training corpora.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Corpus</td><td>Voc</td><td colspan=\"2\">C-Coy V-Coy</td></tr><tr><td>Clean</td><td colspan=\"2\">46,706 99.51</td><td>97.89</td></tr><tr><td>Clean + XN</td><td colspan=\"2\">69,269 99.80</td><td>98.88</td></tr><tr><td colspan=\"3\">Clean + XN + LDC 74,014 99.84</td><td>99.10</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Number of n-grams from test sentences found in the different corpora.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>n</td><td>Clean</td><td colspan=\"2\">XN Clean + XN</td></tr><tr><td>2</td><td colspan=\"2\">12621 11503</td><td>13683</td></tr><tr><td>3</td><td>6990</td><td>6525</td><td>8663</td></tr><tr><td>4</td><td>2396</td><td>2735</td><td>3628</td></tr><tr><td>5</td><td>810</td><td>1283</td><td>1611</td></tr><tr><td>6</td><td>314</td><td>745</td><td>884</td></tr><tr><td>7</td><td>123</td><td>486</td><td>545</td></tr><tr><td>8</td><td>53</td><td>368</td><td>395</td></tr><tr><td>9</td><td>29</td><td>310</td><td>321</td></tr><tr><td>10</td><td>18</td><td>275</td><td>281</td></tr><tr><td colspan=\"4\">3.4 Training the Alignment Models</td></tr><tr><td colspan=\"2\">IBM1 alignments (</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">: Training perplexity for clean and clean</td></tr><tr><td>plus noisy data.</td><td/><td/></tr><tr><td>Model</td><td colspan=\"2\">Clean Clean + XN</td></tr><tr><td>IBM1</td><td>123.44</td><td>142.85</td></tr><tr><td colspan=\"2\">IBM 1-rev 105.72</td><td>120.48</td></tr><tr><td>HMM</td><td>101.34</td><td>121.34</td></tr><tr><td>HMM-rev</td><td>78.61</td><td>92.79</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Translation results.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>System Setup</td><td colspan=\"2\">Clean Noisy</td></tr><tr><td>LM-Oracle</td><td>8.12</td><td>8.75</td></tr><tr><td>LM-100m</td><td>7.59</td><td>7.31</td></tr><tr><td>LM-100m, lexicon prunded</td><td>7.62</td><td>7.69</td></tr><tr><td colspan=\"3\">noisy parallel data can improve translation quality.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |