|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:43:28.398085Z" |
|
}, |
|
"title": "Dual Conditional Cross Entropy Scores and LASER Similarity Scores for the WMT20 Parallel Corpus Filtering Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Felicia", |
|
"middle": [], |
|
"last": "Koerner", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our submission to the WMT20 Parallel Corpus Filtering and Alignment for Low-Resource Conditions Shared Task. This year's corpora are noisy Khmer-English and Pashto-English, with 58.3 million and 11.6 million words respectively (English token count). Our submission focuses on filtering Pashto-English, building on previously successful methods to produce two sets of scores: LASER LM, a combination of the LASER similarity scores provided in the shared task and perplexity scores from language models, and DCCEF DUP, dual conditional cross entropy scores combined with a duplication penalty. We improve slightly on the LASER similarity score and find that the provided clean data can successfully be supplemented with a subsampled set of the noisy data, effectively increasing the training data for the models used for dual conditional cross entropy scoring.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our submission to the WMT20 Parallel Corpus Filtering and Alignment for Low-Resource Conditions Shared Task. This year's corpora are noisy Khmer-English and Pashto-English, with 58.3 million and 11.6 million words respectively (English token count). Our submission focuses on filtering Pashto-English, building on previously successful methods to produce two sets of scores: LASER LM, a combination of the LASER similarity scores provided in the shared task and perplexity scores from language models, and DCCEF DUP, dual conditional cross entropy scores combined with a duplication penalty. We improve slightly on the LASER similarity score and find that the provided clean data can successfully be supplemented with a subsampled set of the noisy data, effectively increasing the training data for the models used for dual conditional cross entropy scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Machine translation systems require large amounts of high quality parallel corpora for training. Neural machine translation models in particular have been found to both require more data (Koehn and Knowles, 2017) , and be more sensitive to noise in training data than statistical machine translation models. While these data can be acquired from online sources, the resulting crawled texts are often noisy and require filtering to produce large amounts of sufficiently clean training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 212, |
|
"text": "(Koehn and Knowles, 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We refer readers to for a more detailed overview of methods for parallel corpus filtering, here we describe the most relevant methods to this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most filtering methods employ some rule-based filtering, usually to prepare the data for other scoring methods, based on language models, classifiers, or other translation models. (S\u00e1nchez-Cartagena et al., 2018) apply hard rules to filter out data before using a classifier to score sentence pairs. (Rossenbach et al., 2018 ) use many rules, including limits on sentence length, Levenshtein distance, length ratio, and token ratio. We use basic language ID and overlap rules only for the Dual Conditional Cross Entropy Scores, this is described in more detail in subsection 5.1. The LASER similarity scores provided by the shared task organizers also apply a language ID filter (assigning the pair a score of 0 if either of the sentences are not recognized as the expected language).", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 212, |
|
"text": "(S\u00e1nchez-Cartagena et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 324, |
|
"text": "(Rossenbach et al., 2018", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule-based Filtering", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The most successful scoring method in the WMT18 Shared Task on Parallel Corpus Filtering was Dual Conditional Cross Entropy Filtering (dccef) (Junczys-Dowmunt, 2018). This method trains an NMT model in both translation directions, uses these to calculate the cross-entropy for each sentence, and finally produces a score based on their agreement. As this year's task deals with lowresource languages (contrary to WMT18, which was En-De), we explore a method to bootstrap the available clean data, thus producing more training data for the intermediate NMT models required for the method (described in more detail subsection 5.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "LASER similarity scoring was the most successful scoring method of the WMT19 Shared Task on Parallel Corpus Filtering for Low-Resource Languages . This method embeds parallel sentences with Language Agnostic SEntence Representations (LASER) (Artetxe and Schwenk, 2018) , and uses these to compute cosine similarity scores. This work attempts to augment LASER similarity scores with language model scores (described in more detail in subsection 4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 268, |
|
"text": "(Artetxe and Schwenk, 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LASER Similarity Scores", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For this year's shared task on Parallel Corpus Filtering and Alignment for Low-Resource conditions, participants are asked to produce scores for each of the sentence pairs in the provided noisy 58.3 million-word (English token count) Khmer-English corpus and 11.6 million-word Pashto-English corpus. These scores are used to subsample sentence pairs amounting to 5 million English words. The resulting subset is evaluated by the quality of an NMT system (fairseq (Ott et al., 2019) ) trained on this data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 463, |
|
"end": 481, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Task", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Participants were given the scripts to either train the evaluation system from scratch, or use the data to fine-tune a provided pretrained MBART model. The MBART model was trained on monolingual data, the details of which are described in (Liu et al., 2020) . The performance of the NMT system is measured by BLEU score on a held-out test set of Wikipedia translations. Participants may also provide re-alignments of the source and target sentences. The organizers provide clean parallel and monolingual data for both of the language pairs, as well as LASER similarity scores, a previously successful method in low-resource conditions (Chaudhary et al., 2019), .", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 257, |
|
"text": "(Liu et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Task", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We participated in the Pashto-English track only, after finding that the model-based methods we explored did not produce meaningful scores for Khmer-English. We did not submit sentence realignments, focusing instead on sentence filtering. Our submission builds on previously successful methods from past WMT shared tasks on parallel corpus filtering to produce two scores: LASER LM, a combination of the LASER similarity scores and perplexity scores from language models, and DCCEF DUP, dual conditional cross entropy scores combined with a a duplication penalty. All BLEU scores listed in this paper come from systems trained from scratch and run on the provided development data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shared Task", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A shortcoming of LASER similarity scores is that they may produce a false positive in the event that the source and target embeddings are similar to each other, but not good translations of each other. Consider, for example, a source and target pair in which the target is simply a copy of the source. This is clearly not a good translation; nothing has been translated. However, the embeddings would be exactly the same, and thus appear to be a very good match. This exact scenario is easily remedied by the use of a language identification filter, but other instances may be more difficult to root out. For example, a source and target sentence in which the target sentence is a string of literal word-for-word translations of the source sentence. To complement the LASER similarity scores and introduce some measure of fluency we train a language model for both English and Pashto.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LASER LM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The LASER similarity scores provided are produced using the methodology outlined in the WMT19 submission . A language identification filter is applied, and sentences pairs with an overlap between source and target of greater than 60% are discarded. The similarity scores are based on the cosine similarity between the multilingual sentence embeddings in the learned embedding space, and normalized with a margin using the k nearest neighbors approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LASER Similarity Scores", |
|
"sec_num": "4.1" |
|
}, |
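
{

"text": "A minimal sketch of the margin-based similarity scoring described above, assuming precomputed LASER embeddings; the k-nearest-neighbor sets and the ratio-margin form are assumptions based on this description, not code from the shared task:\n\nimport numpy as np\n\ndef cosine(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef margin_score(src, tgt, src_knn, tgt_knn, k=4):\n    # normalize cosine(src, tgt) by the average similarity of each side\n    # to its k nearest neighbors in the other language's embedding space\n    denom = (sum(cosine(src, n) for n in src_knn[:k]) +\n             sum(cosine(tgt, n) for n in tgt_knn[:k])) / (2 * k)\n    return cosine(src, tgt) / denom",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "LASER Similarity Scores",

"sec_num": "4.1"

},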
|
{ |
|
"text": "Language models were trained on the provided clean monolingual data. For the English language model was trained on the Wikipedia corpus with 67,796,935 sentences. The Pashto language model was trained on a concatenation of the Common-Crawl and Wikipedia corpora, with the Common-Crawl oversampled by a factor of 64 to produce a dataset of 9,273,763 sentences. The shuffled datasets were split 90/10 (train/test) with test split into 90/10 (dev/test). The language models were trained using fairseq (Ott et al., 2019) with the same settings as the WikiText103 example 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 498, |
|
"end": 516, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Scores", |
|
"sec_num": "4.2" |
|
}, |
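
{

"text": "A sketch of the Pashto data preparation just described, under the assumption that oversampling simply repeats the CommonCrawl lines before shuffling:\n\nimport random\n\ndef build_pashto_lm_corpus(wiki, commoncrawl, oversample=64, seed=0):\n    # concatenate Wikipedia with CommonCrawl oversampled 64x, shuffle,\n    # split 90/10 (train/test), then split the test part 90/10 (dev/test)\n    data = list(wiki) + list(commoncrawl) * oversample\n    random.Random(seed).shuffle(data)\n    cut = int(0.9 * len(data))\n    train, rest = data[:cut], data[cut:]\n    dev_cut = int(0.9 * len(rest))\n    return train, rest[:dev_cut], rest[dev_cut:]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language Model Scores",

"sec_num": "4.2"

},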
|
{ |
|
"text": "The language model, M , was used to produce per-sentence perplexity scores for each of the sentences in the corpus. Where s = w 1 , w 2 , ..., w n is a sentence of length n:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Scores", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "P P L M (s) = 2 \u2212 1 n log P (w 1 ,w 2 ,...,wn)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Scores", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(1) scoring BLEU (%) LASER 9.67 LASER + 0.4 PPL SCORE 9.82 LASER + 0.5 PPL SCORE 9.81 LASER + 0.6 PPL SCORE 9.62 LASER + 0.7 PPL SCORE 9.75 LASER + 0.8 PPL SCORE 9.88 LASER + 0.9 PPL SCORE 9.94 LASER + 1.0 PPL SCORE 9.57 Perplexity scores for both sides (Pashto and English, H ps (x) and H en (x) respectively) are then added together.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Scores", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "PPL SCORE(x) = P P L Men (s en ) + P P L Mps (s ps )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Language Model Scores", |
|
"sec_num": "4.2" |
|
}, |
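
{

"text": "A minimal sketch of equations (1) and (2), assuming per-token base-2 log-probabilities obtained from the trained fairseq language models; the function names are illustrative, not part of any toolkit:\n\ndef perplexity(token_log2probs):\n    # eq. (1): 2 to the power of the negative mean per-token log2 probability\n    return 2 ** (-sum(token_log2probs) / len(token_log2probs))\n\ndef ppl_score(en_log2probs, ps_log2probs):\n    # eq. (2): sum of the English and Pashto sentence perplexities\n    return perplexity(en_log2probs) + perplexity(ps_log2probs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language Model Scores",

"sec_num": "4.2"

},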
|
{ |
|
"text": "The language model scores and LASER similarity scores were combined to produce LASER LM. Both scores were normalised to fall in the range [0, 1] and the PPL SCORE subtracted from 1.0, such that lower perplexity corresponded to a higher score. Finally, the two scores were added together to produce the final score in the range [0, 2] . We experimented with different scaling factors f for the PPL SCORE. Table 1 shows the range of factors f explored to select the scaling factor used in the final score. Since the BLEU scores produced differed only slightly, we also evaluated the models on some of the provided clean data, randomly selecting 2500 lines (roughly the size of the provided devset) from each of the clean corpora, as well as 2500 lines of a shuffled concatenation (concat) of the clean corpora. Results are shown in table 2. For the most part, they did not vary greatly, and where they did there was no consistent winner across corpora. We choose a factor of 0.5, as the model resulting from these scores generally performed well, and, importantly, performed well on the provided devset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 330, |
|
"text": "[0,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 333, |
|
"text": "2]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 411, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combining LASER and LM Scores", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "LASER LM = LASER + f \u2022 (1.0 \u2212 PPL SCORE)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Combining LASER and LM Scores", |
|
"sec_num": "4.3" |
|
}, |
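
{

"text": "A sketch of the combination in equation (3); min-max normalization is an assumption here, as the text only states that both scores were normalized to [0, 1]:\n\nimport numpy as np\n\ndef laser_lm(laser, ppl, f=0.5):\n    # min-max normalize both score arrays to [0, 1]\n    norm = lambda a: (a - a.min()) / (a.max() - a.min())\n    laser_n = norm(np.asarray(laser, dtype=float))\n    ppl_n = norm(np.asarray(ppl, dtype=float))\n    # eq. (3): flip perplexity so that lower perplexity scores higher\n    return laser_n + f * (1.0 - ppl_n)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combining LASER and LM Scores",

"sec_num": "4.3"

},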
|
{ |
|
"text": "The dual conditional cross entropy scores produced state-of-the-art performance on the WMT18 shared task on filtering corpora for high-resource languages. However, this method requires two translation models trained in both the forward and backward direction. This presents a challenge in lowresource conditions due to the limited training data available. We find that the model quality can be improved by supplementing the provided clean data with a subsampled set consisting of 1M English tokens of the noisy data, subsampled based on the LASER similarity scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DCCEF DUP", |
|
"sec_num": "5" |
|
}, |
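
{

"text": "A sketch of the subsampling step, assuming whitespace tokenization for the English token count; pairs are (pashto, english) tuples ranked by the provided LASER similarity score:\n\ndef subsample_by_score(pairs, scores, token_budget=1000000):\n    # keep the highest-scoring pairs until roughly 1M English tokens\n    ranked = sorted(zip(scores, pairs), key=lambda t: t[0], reverse=True)\n    kept, total = [], 0\n    for _, (ps, en) in ranked:\n        kept.append((ps, en))\n        total += len(en.split())\n        if total >= token_budget:\n            break\n    return kept",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "DCCEF DUP",

"sec_num": "5"

},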
|
{ |
|
"text": "Sentence pairs in which one or both of the sentences did not match the expected language (English or Pashto) as determined by fastText 2 were given a score of 0, effectively removing this pair from consideration. This is a harsh filter, removing around 45% of sentence pairs. The resulting scores were scaled by the overlap between source and target sentence tokens, producing a sort of non-word token matching score. Note that this does not reward pairs that copy large portions of the source sentence to the target, as these are already removed by the language identification filtering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "5.1" |
|
}, |
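
{

"text": "A sketch of this preprocessing, using the fastText language identification model referenced in the footnote; the exact form of the overlap scaling is not specified above, so the ratio used here is one plausible reading:\n\nimport fasttext\n\nlid = fasttext.load_model('lid.176.bin')  # fastText LID model\n\ndef preprocess_score(ps_sent, en_sent, base_score):\n    # zero out pairs where either side fails language identification\n    if lid.predict(ps_sent)[0][0] != '__label__ps':\n        return 0.0\n    if lid.predict(en_sent)[0][0] != '__label__en':\n        return 0.0\n    # scale by token overlap between source and target, a rough\n    # non-word token matching score (numbers and names should match)\n    ps_tokens, en_tokens = set(ps_sent.split()), set(en_sent.split())\n    overlap = len(ps_tokens & en_tokens) / max(1, min(len(ps_tokens), len(en_tokens)))\n    return base_score * overlap",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "5.1"

},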
|
{ |
|
"text": "Dual Conditional Cross Entropy Filtering (Junczys-Dowmunt, 2018) was found to be state of the art in the WMT18 high-resource data filtering task . The method uses two translation models in the forward and backward direction, which are used to compute crosslingual similarity scores. Given the translation model M , sentence pairs (x, y) from the noisy corpus were forcedecoded and a cross-entropy score produced:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "H M (y|x) = 1 |y| |y| t=1 log p M (y t |y [1,t\u22121],x ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Cross-entropy scores for both directions (source-totarget and target-to-source, H F (y|x) and H B (x|y) respectively) are then averaged with a penalty on a large difference between the scores to produce the overall score: Translation models were trained using fairseq (Ott et al., 2019) with the same parameters used in the baseline flores model 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 286, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "DCCEF(x, y) = H F (y|x) + H B (x|y) 2 \u2212 |H F (y|x) \u2212 H B (x|y)| (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
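
{

"text": "A sketch of equations (4) and (5), assuming per-token log-probabilities obtained by force-decoding each pair with the two fairseq translation models; H_F and H_B denote the forward and backward cross-entropies respectively:\n\ndef cross_entropy(token_logprobs):\n    # eq. (4): negative mean log-probability of the forced decode\n    return -sum(token_logprobs) / len(token_logprobs)\n\ndef dccef(h_fwd, h_bwd):\n    # eq. (5): average of the two directions, penalized by their disagreement\n    return (h_fwd + h_bwd) / 2 - abs(h_fwd - h_bwd)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dual Conditional Cross Entropy Scores",

"sec_num": "5.2"

},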
|
{ |
|
"text": "We used the provided clean training data to train translation models in both directions, and used these models to produce a dccef score as described above. Initially only the dccef scores were used to filter the noisy data and train a system, we did not perform the preprocessing as described in 5.1. The BLEU score produced by this system is shown in 3 under clean.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We then supplemented the clean training data with a subsample of the noisy data and trained translation models in both directions on the augmented data. The subsample of 1 million English tokens and their translations was selected based on the provided LASER similarity score. Again, for this experiment only the dccef scores were used to filter the noisy data, no preprocessing was performed. As shown in Table 3 , supplementing the training data with the subsampled set resulted in an overall increase in 3.37 BLEU points.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 413, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Finally, we preprocessed the noisy data as described in 5.1 and used both sets of systems (one set trained on clean data, and one set trained on augmented data) to score the preprocessed data. As shown in Table 3 , there were further, significant gains, from preprocessing, and the dccef scores from the systems trained on augmented data outperformed the dccef scores from the systems trained on just the clean data. Prepocessing also reduced the gap between the performance of dccef scores produced by systems trained on just the clean data and the performance of dccef scores produced by systems trained on augmented data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 212, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "3 https://github.com/ facebookresearch/flores# train-a-baseline-transformer-model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dual Conditional Cross Entropy Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The scores were scaled by a duplication penalty for duplicate (greater than one) occurrences of either one or both of the target or source sentence of a pair in the corpus as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Duplication Penalty", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "dup penalty = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Duplication Penalty", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "1.0 neither side duplicate 0.9 one side duplicate 0.8 both sides duplicate (6) This resulted in a minor improvement in BLEU score on the development data, as seen in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 173, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Duplication Penalty", |
|
"sec_num": "5.3" |
|
}, |
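
{

"text": "A sketch of the duplication penalty in equation (6), counting exact duplicate sentences across the corpus:\n\nfrom collections import Counter\n\ndef dup_penalties(pairs):\n    # eq. (6): 1.0 if neither side repeats in the corpus,\n    # 0.9 if one side does, 0.8 if both sides do\n    src_counts = Counter(s for s, _ in pairs)\n    tgt_counts = Counter(t for _, t in pairs)\n    penalties = []\n    for s, t in pairs:\n        dups = int(src_counts[s] > 1) + int(tgt_counts[t] > 1)\n        penalties.append([1.0, 0.9, 0.8][dups])\n    return penalties",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Duplication Penalty",

"sec_num": "5.3"

},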
|
{ |
|
"text": "Various other combinations of the aforementioned scores were explored, and the results are listed in Table 4 . Interestingly, the results suggest that the duplication penalty did not improve the LASER LM score, and combining the LASER LM and DCCEF DUP scores did not result in a better BLEU score. However, it should be noted that the differences in BLEU scores resulting from different combinations are generally minor and may not be statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "None of the filtering methods significantly outperformed the LASER-based method, but the improved dccef filtering method can at least match the LASER-based method when the training data is augmented, and the preprocessing steps and duplication penalty are applied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This paper describes the our submission to the WMT20 Parallel Corpus Filtering Shared Task for low-resource conditions. We find that filtering based on dccef scores can compete with filtering based on LASER similarity scores when the models trained for the dccef scores are augmented with a subsample of the noisy data. challenges posed by limited data for model-based filtering methods can be somewhat mitigated by bootstrapping additional data from the noisy corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://github.com/pytorch/fairseq/ blob/master/examples/language_model/ README.md", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://fasttext.cc/docs/en/ language-identification.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe and Holger Schwenk. 2018. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lowresource corpus filtering using multilingual sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuqing", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "261--266", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5435" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vishrav Chaudhary, Yuqing Tang, Francisco Guzm\u00e1n, Holger Schwenk, and Philipp Koehn. 2019. Low- resource corpus filtering using multilingual sentence embeddings. In Proceedings of the Fourth Confer- ence on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 261-266, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Dual conditional cross-entropy filtering of noisy parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "888--895", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6478" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888-895, Belgium, Brussels. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "On the impact of various types of noise on neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--83", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2709" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83, Melbourne, Australia. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Pino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "54--72", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5404" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Francisco Guzm\u00e1n, Vishrav Chaud- hary, and Juan Pino. 2019. Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 54-72, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Findings of the WMT 2018 shared task on parallel corpus filtering", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikel", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Forcada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "726--739", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6453" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 726-739, Bel- gium, Brussels. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Six challenges for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Knowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Neural Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--39", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3204" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Multilingual denoising pre-training for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--53", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-4009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The RWTH aachen university filtering system for the WMT 2018 parallel corpus filtering task", |
|
"authors": [ |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Rossenbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Rosendahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunsu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Gra\u00e7a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aman", |
|
"middle": [], |
|
"last": "Gokrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "946--954", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6487" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nick Rossenbach, Jan Rosendahl, Yunsu Kim, Miguel Gra\u00e7a, Aman Gokrani, and Hermann Ney. 2018. The RWTH aachen university filtering system for the WMT 2018 parallel corpus filtering task. In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 946-954, Bel- gium, Brussels. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Prompsit's submission to WMT 2018 parallel corpus filtering shared task", |
|
"authors": [ |
|
{

"first": "V\u00edctor",

"middle": [

"M."

],

"last": "S\u00e1nchez-Cartagena",

"suffix": ""

},

{

"first": "Marta",

"middle": [],

"last": "Ba\u00f1\u00f3n",

"suffix": ""

},

{

"first": "Sergio",

"middle": [],

"last": "Ortiz-Rojas",

"suffix": ""

},

{

"first": "Gema",

"middle": [],

"last": "Ram\u00edrez",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "955--962", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6488" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V\u00edctor M. S\u00e1nchez-Cartagena, Marta Ba\u00f1\u00f3n, Sergio Ortiz-Rojas, and Gema Ram\u00edrez. 2018. Prompsit's submission to WMT 2018 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 955-962, Belgium, Brussels. Association for Com- putational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>:</td><td>Results on development data (training</td></tr><tr><td colspan=\"2\">from scratch) for different scaling factors of the</td></tr><tr><td colspan=\"2\">PPL SCORE.</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Results (BLEU(%)) on subsamples of clean data (training from scratch) for different scaling factors of the PPL SCORE.", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"text": "This suggests that training data for H en ,H ps scoring method", |
|
"content": "<table><tr><td/><td/><td>BLEU (%)</td></tr><tr><td>clean</td><td>dccef</td><td>3.97</td></tr><tr><td>clean + top 1M noisy</td><td>dccef</td><td>7.34</td></tr><tr><td>clean</td><td>dccef + preprocessing</td><td>8.93</td></tr><tr><td>clean + top 1M noisy</td><td>dccef + preprocessing</td><td>9.68</td></tr><tr><td>clean + top 1M noisy</td><td colspan=\"2\">(dccef + preprocessing) \u2022 dup penalty 9.94</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Results on development data (training from scratch) for dccef scores.training data for H en ,H ps (dccef) scoring method", |
|
"content": "<table><tr><td>BLEU (%)</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"text": "Results on development data (training from scratch). Bolded scores are the two scores submitted. All dccef scores reported in this table were combined with preprocessing as described in 5.1", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |