|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:42:05.421233Z" |
|
}, |
|
"title": "Data Selection for Unsupervised Translation of German-Upper Sorbian", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cognition University of Groningen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes the methods behind the systems submitted by the University of Groningen for the WMT 2020 Unsupervised Machine Translation task for German-Upper Sorbian. We investigate the usefulness of data selection in the unsupervised setting. We find that we can perform data selection using a pretrained model and show that the quality of a set of sentences or documents can have a great impact on the performance of the unsupervised neural machine translation (UNMT) system trained on it. Furthermore, we show that documentlevel data selection should be preferred for training the state-of-the-art UNMT model, the XLM model, when possible. Finally, we show that there is a trade-off between quality and quantity of the data used to train UNMT systems.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes the methods behind the systems submitted by the University of Groningen for the WMT 2020 Unsupervised Machine Translation task for German-Upper Sorbian. We investigate the usefulness of data selection in the unsupervised setting. We find that we can perform data selection using a pretrained model and show that the quality of a set of sentences or documents can have a great impact on the performance of the unsupervised neural machine translation (UNMT) system trained on it. Furthermore, we show that documentlevel data selection should be preferred for training the state-of-the-art UNMT model, the XLM model, when possible. Finally, we show that there is a trade-off between quality and quantity of the data used to train UNMT systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Unsupervised Neural Machine Translation (UNMT) has recently become the dominant paradigm for unsupervised MT, with the advent of cross-lingual language model pretraining as used in the XLM model (Conneau and Lample, 2019) . However, much of the existing research in UNMT assumes that the amount of data available for one language is roughly equivalent to the other. The WMT 2020 Unsupervised Machine Translation task is unique in that monolingual data is abundant for one language (German), with hundreds of millions of sentences available, and sparse for the other (Upper Sorbian), which only has around 750 thousand sentences available. With a wealth of data available on the German side, it is natural to ask: how can we best use this data? Viewing this under the lens of data selection, we break this broad question down into 3 concrete sub-questions, tailored for the unsupervised setting. They are as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 221, |
|
"text": "(Conneau and Lample, 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 How can we determine the quality of training data?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 What kinds of data selection are best for training an XLM model?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Is quality or quantity more important when it comes to training data for UNMT?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Section 2 describes the general setup pertaining to every experiment, including datasets, data processing steps, model architecture, and training details. In Section 3, we detail our individual experiments and their corresponding results. Finally, in Section 4, we make our conclusions and discuss paths for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For Upper Sorbian, we use the 3 monolingual datasets provided by the Sorbian Institute, the Witaj Sprachzentrum, and the web data from CIS, LMU. We also use the Upper Sorbian side of the parallel corpus from train.hsb-de.hsb.gz. For German, we use monolingual data from News Crawl and Common Crawl. For validation and testing, we use the data provided in devtest.tar.gz.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All data is tokenized and truecased using the Moses toolkit (Koehn et al., 2007) . For BPE segmentation (Sennrich et al., 2016) , we apply a joint segmentation for both languages. This is done by first taking a sample of the German data of the same length as the Upper Sorbian data (around 750 thousand sentences). The BPE codes are learned and applied using FastBPE. 1 After BPE is applied, we remove duplicate sentences while retaining the order of the corpora. 2 We used the XLM model (Conneau and Lample, 2019) using the default parameters, with the excep-tion of allowing for sentences of max length 200 rather than 100. 3 The language model pretraining step includes only masked language modelling, and training is limited to 24 hours. The NMT step is also limited to 24 hours, with the additional stopping criterion of no improvement on the DE\u2192HSB validation set for 10 epochs. 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 80, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 127, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 465, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 514, |
|
"text": "(Conneau and Lample, 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "2" |
|
}, |
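
{

"text": "As an illustrative sketch of this preprocessing (the file name and exact sample size are assumptions, not the authors' released scripts), duplicates can be removed while preserving corpus order, and the German sample for joint BPE can be drawn as follows:\n\nimport random\n\ndef dedup_preserve_order(lines):\n    # dict preserves insertion order (Python 3.7+), so the first occurrence\n    # of each sentence is kept in its original position\n    return list(dict.fromkeys(lines))\n\nwith open('train.de', encoding='utf-8') as f:  # hypothetical German corpus file\n    german = f.read().splitlines()\n\n# Sample as many German sentences as there are Upper Sorbian ones (~750k)\nrandom.seed(0)\nbpe_sample = random.sample(german, 750_000)\n\n# Remove duplicates while retaining corpus order (the paper does this after BPE)\ngerman = dedup_preserve_order(german)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setup",

"sec_num": "2"

},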
|
{ |
|
"text": "For all of our data selection experiments, we start by training an initial model. Our initial model is trained on 10 million German sentences and all of the available Upper Sorbian sentences. The 10 million German sentences include all of the data from years 2007 and 2010, and the remaining sentences are taken from 2014. 5 Our initial model achieves BLEU scores of 17.43 and 19.05 for DE\u2192HSB and HSB\u2192DE respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We apply two forms of data selection: sentencelevel and document-level. As we have an abundance of German data (D) and limited Upper Sorbian data (H), we are only concerned with data selection for German. To select from D, we first must score our data in terms of its potential to improve the performance of our NMT model. Drawing inspiration from Moore and Lewis (2010), our scoring function is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Score(s) = LM H\u2192D (s) \u2212 LM D (s) |s|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this equation, s refers to any sentence in the German data, |s| to its token length, LM X (s) to the log probability of s using a language model trained on dataset X , and H \u2192 D to the dataset obtained by translating H into German using the initial system. A high scoring sentence is thus a sentence that has a high probability according to the Upper Sorbian language model compared to that of the German language model. The language model we use is KenLM (Heafield et al., 2013) . We use a trigram model, with all other parameters being the default values. Since we require a portion of the German dataset to train the model, we choose N sentences randomly, with N being equal to the number of sentences in H. 7 These sentences are not included during the selection process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 482, |
|
"text": "(Heafield et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
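
{

"text": "As a minimal sketch of this scoring function using the kenlm Python bindings (the model paths are hypothetical, and the authors' exact training setup may differ):\n\nimport kenlm\n\n# Trigram LMs: one trained on H translated into German, one on a German sample\nlm_hd = kenlm.Model('lm_h2d.arpa')  # hypothetical path: LM over H\u2192D\nlm_d = kenlm.Model('lm_de.arpa')    # hypothetical path: LM over the German sample\n\ndef score(sentence):\n    # kenlm returns log10 probabilities; normalize the difference by token length\n    tokens = sentence.split()\n    return (lm_hd.score(sentence) - lm_d.score(sentence)) / len(tokens)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Selection",

"sec_num": "3.1"

},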
|
{ |
|
"text": "For sentence-level selection, we simply order each sentence based on score and select the sentences with the highest scores. For document-level selection, we score each document by averaging its sentence-level scores, and select the documents with the highest scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
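
{

"text": "A minimal sketch of both strategies, assuming each sentence has already been scored with the function above (the data structures here are illustrative):\n\ndef select_sentences(scored, budget):\n    # scored: list of (sentence, score) pairs; keep the top `budget` sentences\n    ranked = sorted(scored, key=lambda p: p[1], reverse=True)\n    return [s for s, _ in ranked[:budget]]\n\ndef select_documents(docs, budget):\n    # docs: list of documents, each a list of (sentence, score) pairs;\n    # a document's score is the average of its sentence scores\n    ranked = sorted(docs, key=lambda d: sum(sc for _, sc in d) / len(d), reverse=True)\n    selected, total = [], 0\n    for doc in ranked:  # take whole documents until the sentence budget is filled\n        selected.extend(s for s, _ in doc)\n        total += len(doc)\n        if total >= budget:\n            break\n    return selected",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Selection",

"sec_num": "3.1"

},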
|
{ |
|
"text": "To answer our first research question, we show that systems trained on the highest scoring sentences and documents perform significantly better than those trained on the lowest scoring sentences and documents. For this experiment, we start with 10 million sentences from News Crawl 2015, and score each sentence and document. We then train models on the 2 million lowest and highest scoring sentences, as well as the lowest and highest scoring documents which total 2 million sentences in length. The results are shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 529, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The results show a drastic improvement from using the lowest quality sentences to the highest according to our scoring function. This applies both at the sentence and document level. However only document-level filtering outperforms random selection. We speculate that this is due to a potential lack of variety in the sentence-level filtering, as it may select sentences with substantial trigram overlap, due to their similarly high score. This would be less of an issue on the document-level, since there is a smaller likelihood for two documents to have a high degree of overlap. A potential solution to this lack of variety would be to select sentences sequentially, enforcing a word overlap constraint. This would limit the number of words a sentence could share with previously selected sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Selection", |
|
"sec_num": "3.1" |
|
}, |
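
{

"text": "The fix proposed above could look like the following sketch; the overlap threshold is an assumption, as the paper does not implement or tune this idea:\n\ndef select_with_overlap_constraint(ranked_sentences, budget, max_overlap=0.5):\n    # Greedily take high-scoring sentences (assumed sorted by score, descending),\n    # skipping any whose word overlap with already-selected text is too high\n    selected, seen_words = [], set()\n    for sentence in ranked_sentences:\n        words = set(sentence.split())\n        if not words:\n            continue\n        if len(words & seen_words) / len(words) > max_overlap:\n            continue\n        selected.append(sentence)\n        seen_words |= words\n        if len(selected) == budget:\n            break\n    return selected",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Selection",

"sec_num": "3.1"

},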
|
{ |
|
"text": "We see from Table 1 that document-level selection outperforms sentence-level selection. This could be for 2 reasons: either the sentences selected are higher quality on average or the language model pretraining step for the XLM model benefits more from documents than sentences. To further explain the latter reason, the pretraining step for XLM uses streams of text which can contain multiple sentences, so sentences being in order should be beneficial for training the language model. To test this, we take the document-level selected sentences and shuffle their order and train a new model. With a shuffled dataset, we obtained far lower BLEU scores of 12.84 and 16.73 for DE\u2192HSB and HSB\u2192DE respectively. As these BLEU scores are lower than even the scores obtained via sentence-level selection, we can conclude that the XLM model greatly benefits from sentences being in order for pretraining. However, it does appear that sentence-level selection provides higher quality sentences individually.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Document-level versus sentence-level", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "With both selection methods, we can choose a threshold to determine how many sentences we should use for training our model. We start by selecting roughly 93 million sentences from News Crawl 2007-2019. 8 We chose the first 10 million sentences from each year, apart from 2008 and 2009, which only contain roughly 6.5 million sentences each. The sentences are chosen at the document-level. From the 93 million sentences combined, we use document-level selection to choose various amounts of data, varying from 1 million to 20 million sentences, and train models on each. The results are shown in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 204, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 596, |
|
"end": 603, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality versus quantity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As we can see, selecting 5 million sentences results in the highest BLEU scores. As data is either added or removed, the performance drops by around 1-2 BLEU. Given the nature of attentionbased neural models, it is somewhat surprising to see that using more data is not helpful and in fact potentially harmful. Whether this is a peculiarity of the German-Upper Sorbian data or not requires further investigation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality versus quantity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As a portion of the Upper Sorbian data is crawled from the web, we also perform data selection on Common Crawl. Since document boundaries are not available for Common Crawl, we can only use sentence-level selection. 9 We tested using various amounts of data in addition to the 5 million News Crawl sentences and report results in Table 3 . As we can see the system with 5 million News Crawl sentences and 5 million Common Crawl sentences performed the best. While the improvements are marginal, this may be due to a similar phenomenon as in Table 2 , where too much monolingual data is not beneficial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 217, |
|
"text": "9", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 337, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 548, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using Common Crawl data", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Since we saw improvements from one round of data selection, it would stand to reason that using a more accurate model to translate the Upper Sorbian data to German would result in potentially better data selection. As such, we use our model trained on 5 million sentences selected from News Crawl to translate the Upper Sorbian data into German, and apply the same data selection process on the roughly 93 million sentences as before.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Iterative data selection", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "The results on the second iteration are markedly worse, with BLEU scores of 15.9 and 17.45, on DE\u2192HSB and HSB\u2192DE, respectively, compared to the original scores of 17.18 and 19.32. We suspect that this is due to the same data being used for training the NMT system and for selection, despite the data being used to train the KenLM models being different. 10 This highlights a major downside of data selection using our methods: data cannot be used both for training a selection model and for the selection itself. The most likely reason for this is that the model will give all sentences that appear in the original training set higher scores, and documents which include the same or similar sentences will be chosen over documents that are more unique, effectively leading to an overfitting problem. This then raises a question of trade-off: is it better to use worse quality data to train the initial model and to then select from better quality data, or vice versa? Our results seem to indicate the former, but further research is required to get a definitive answer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 356, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Iterative data selection", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "To further analyze the data selected by the model, we look at the frequencies of words that appear in the selected data. We compare our documentfiltered data from Section 3.1 with the data from the Upper Sorbian side for 10 word roots in Table 4. These word roots are selected manually as the correctly translated root is easy to verify (with Wikipedia and Wiktionary), and the translations are also one-to-one (ignoring the suffixes). We also select roots with varying frequency within the Upper Sorbian dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Further Analysis", |
|
"sec_num": "3.6" |
|
}, |
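
{

"text": "A sketch of how such counts can be computed; the case-folded substring match is our reading of how roots were counted, not the authors' exact procedure:\n\ndef root_counts(lines, roots):\n    # Case-insensitive substring counts, so inflected forms sharing a root are included\n    lowered = [line.lower() for line in lines]\n    return {r: sum(line.count(r.lower()) for line in lowered) for r in roots}\n\ndef relative_frequency(subset, pool, root):\n    # Frequency of a root in the selected subset relative to the full selection pool\n    return root_counts(subset, [root])[root] / root_counts(pool, [root])[root]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Further Analysis",

"sec_num": "3.6"

},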
|
{ |
|
"text": "As we can see, the high-quality documentfiltered data has higher relative frequencies for the first 7 out of 10 word roots, and the lower-quality data has higher frequencies for the last 3. As the words are in order of frequency within the Upper Sorbian dataset, this indicates that the higher quality filtered data better represents the topics found in the Upper Sorbian dataset. Roots such as Sorbiaand Bautzen (a city where Sorbian is spoken) appear far more often in the higher quality data, despite being relatively uncommon in the German dataset. The last 3 words are relatively rare in the Upper Sorbian data, so it makes sense that the higher quality filtered data would have fewer occurrences of these words. Although most of the examples are related to locations, we do see that Domowin-(the root for Domowina, a non-profit organization) and Catholic-appear to show the same trends.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Further Analysis", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "We also looked at the relative frequencies of the years 2000-2025 across our various models to see the effect of our filtering methods in matching the Upper Sorbian data according to year. We expect that the filtered German data with the frequency distribution most closely matching the frequency distribution of the Upper Sorbian data will have the strongest NMT performance. We show the results in Figure 1 . Our initial model predictably has spikes in frequency at 2007, 2010, and 2014 as we manually chose data from these years to somewhat match the frequency of the Upper Sorbian data. Meanwhile, the 5 million document-level selected sentences from News Crawl seems to more closely match the frequencies in the Upper Sorbian data from 2000 to 2010, but has larger relative frequencies for years 2010 to 2020. We suspect that this is due to the limitation of the data available for selection, as earlier years have fewer sentences for the selection model to choose. Finally, the model using 5 million News Crawl and 5 million Common Crawl sentences has a frequency graph that most closely matches the graph of the Upper Sorbian data. The similarity of the Upper Sorbian graph to the other graphs seems to correlate with the resulting BLEU scores of the NMT model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 408, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Further Analysis", |
|
"sec_num": "3.6" |
|
}, |
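
{

"text": "The year analysis can be reproduced with a sketch along these lines (the regular expression and the normalization are assumptions based on the description above):\n\nimport re\nfrom collections import Counter\n\nYEAR = re.compile(r'\\b20[0-2][0-9]\\b')  # matches 2000-2029; the analysis uses 2000-2025\n\ndef year_frequencies(lines):\n    counts = Counter(y for line in lines for y in YEAR.findall(line))\n    total = len(lines)  # relative to the number of sentences in the dataset\n    return {year: n / total for year, n in counts.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Further Analysis",

"sec_num": "3.6"

},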
|
{ |
|
"text": "In the UNMT setting where one has access to a wealth of resources for one language, we investigated the feasibility of data selection. We attempt both document-level and sentence-level selection, Table 4 : Frequencies of word roots within the Upper Sorbian (HSB), and relative frequencies of low-quality document-filtered (Doc Low) and high-quality document-filtered (Doc High) datasets. Relative frequency is based on the total frequency of each root within the 10 million sentences that the sets are selected from (i.e. the DE count column). Case is ignored when determining frequency.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 203, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "finding that both methods are capable of distinguishing low quality data from high quality data, with quality in this case defined as the efficacy for training an XLM model. We found that while document-level selection chooses poorer sentences on average, the XLM model can leverage the intersentence information to achieve better results than when simply using the highest quality sentences. We also found that there appears to be a point where adding more monolingual data is not beneficial, but rather potentially harmful, indicating a need for data selection. Finally, we noted some potential drawbacks to using this form of data selection, particularly that data cannot be used for both initial training of the NMT model and subsequent selection. Future work could continue along many avenues, such as the effectiveness of data selection on other language pairs, or even on the Upper Sorbian side.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "https://github.com/glample/fastBPE 2 For document-level filtering, we do not remove duplicates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The max length increase was found to perform slightly better in early testing.4 Both steps are limited to 24 hours as there was little to no improvement observed beyond 24 hours in preliminary tests.5 We choose these years because we found that the frequencies of \"20XX\" in the Upper Sorbian data peak at 2005, 2010, and 2014, and 2007 is the earliest News Crawl data available.6 The intuition behind subtracting the score of the German language model is that without it a sentence may have a high score due to it containing frequent words in general (e.g. \"the\") rather than words that are particularly frequent in the Upper Sorbian dataset (e.g. \"Sorbia\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The choice of N follows Moore and Lewis (2010).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We exclude years 2007, 10, and 14 as they are used for training our initial model and thus may affect the selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our finding that randomly selected sentences indeed perform better was done post-hoc, which is why we use sentences selected with the highest scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also saw similar performance drops when trying to include the data from years 2007, 10, and 14 in our original model trained on selected data, as these years were used to train the initial system used for selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Crosslingual language model pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7059--7069", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7059-7069.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Scalable modified kneserney language model estimation", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Pouzyrevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "690--696", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H Clark, and Philipp Koehn. 2013. Scalable modified kneser- ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 690-696.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Herbst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 177-180, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Intelligent selection of language model training data", |
|
"authors": [ |
|
{

"first": "Robert",

"middle": [

"C"

],

"last": "Moore",

"suffix": ""

},

{

"first": "William",

"middle": [],

"last": "Lewis",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACL 2010 Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "220--224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Pro- ceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Relative frequencies of the years 2000-2025 within the various datasets. The frequencies are relative to the total number of sentences in that dataset.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"text": "Table 1: BLEU scores for XLM trained on data selected with the lowest and highest sentence and documentlevel scores, as well as randomly selected sentences and documents.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Selection Type</td><td colspan=\"2\">DE\u2192HSB HSB\u2192DE</td></tr><tr><td>Sentence -Low</td><td>5.21</td><td>5.91</td></tr><tr><td>Sentence -Random</td><td>16.98</td><td>18.45</td></tr><tr><td>Sentence -High</td><td>15.08</td><td>18.05</td></tr><tr><td>Document -Low</td><td>9.32</td><td>8.46</td></tr><tr><td>Document -Random</td><td>17.03</td><td>18.19</td></tr><tr><td>Document -High</td><td>17.60</td><td>19.23</td></tr><tr><td>6</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"text": "BLEU scores of models trained on varying amounts of document-level selected data.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">Sentences (M) DE\u2192HSB HSB\u2192DE</td></tr><tr><td>2</td><td>17.76</td><td>19.19</td></tr><tr><td>5</td><td>18.04</td><td>19.57</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: BLEU scores of models trained using 5 mil-</td></tr><tr><td>lion sentences from News Crawl and various amounts</td></tr><tr><td>of sentences from Common Crawl.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |