|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:30:59.667589Z" |
|
}, |
|
"title": "Beyond Static Models and Test Sets: Benchmarking the Potential of Pre-trained Models Across Tasks and Languages", |
|
"authors": [ |
|
{ |
|
"first": "Kabir", |
|
"middle": [], |
|
"last": "Ahuja", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sandipan", |
|
"middle": [], |
|
"last": "Dandapat", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sunayana", |
|
"middle": [], |
|
"last": "Sitaram", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 100 languages, most existing multilingual NLP benchmarks provide evaluation data in only a handful of these languages with little linguistic diversity. We argue that this makes the existing practices in multilingual evaluation unreliable and does not provide a full picture of the performance of MMLMs across the linguistic landscape. We propose that the recent work done in Performance Prediction for NLP tasks can serve as a potential solution in fixing benchmarking in Multilingual NLP by utilizing features related to data and language typology to estimate the performance of an MMLM on different languages. We compare performance prediction with translating test data with a case study on four different multilingual datasets, and observe that these methods can provide reliable estimates of the performance that are often onpar with the translation based approaches, without the need for any additional translation as well as evaluation costs.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 100 languages, most existing multilingual NLP benchmarks provide evaluation data in only a handful of these languages with little linguistic diversity. We argue that this makes the existing practices in multilingual evaluation unreliable and does not provide a full picture of the performance of MMLMs across the linguistic landscape. We propose that the recent work done in Performance Prediction for NLP tasks can serve as a potential solution in fixing benchmarking in Multilingual NLP by utilizing features related to data and language typology to estimate the performance of an MMLM on different languages. We compare performance prediction with translating test data with a case study on four different multilingual datasets, and observe that these methods can provide reliable estimates of the performance that are often onpar with the translation based approaches, without the need for any additional translation as well as evaluation costs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recent years have seen a surge of transformer (Vaswani et al., 2017) based Massively Multilingual Language Models (MMLMs) like mBERT (Devlin et al., 2019) , XLM-RoBERTa (XLMR) (Conneau et al., 2020) , mT5 (Xue et al., 2021) , RemBERT (Chung et al., 2021) . These models are pretrained on varying amounts of data of around 100 linguistically diverse languages, and can in principle support fine-tuning on different NLP tasks for these languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 68, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 154, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 198, |
|
"text": "(Conneau et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 223, |
|
"text": "(Xue et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 254, |
|
"text": "RemBERT (Chung et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These MMLMs are primarily evaluated for their performance on Sequence Labelling (Nivre et al., 2020; Pan et al., 2017) , Classification (Conneau et al., 2018; Yang et al., 2019; Ponti et al., 2020) , Question Answering (Artetxe et al., 2020; Lewis et al., 2020; Clark et al., 2020a) and Retrieval (Artetxe and Schwenk, 2019; Roy et al., 2020; Botha et al., 2020) tasks. However, most these tasks often cover only a handful of the languages supported by the MMLMs, with most tasks having test sets in fewer than 20 languages (cf. Figure 1b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 100, |
|
"text": "(Nivre et al., 2020;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 118, |
|
"text": "Pan et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 158, |
|
"text": "(Conneau et al., 2018;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 177, |
|
"text": "Yang et al., 2019;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 197, |
|
"text": "Ponti et al., 2020)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 241, |
|
"text": "(Artetxe et al., 2020;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 261, |
|
"text": "Lewis et al., 2020;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 282, |
|
"text": "Clark et al., 2020a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 324, |
|
"text": "(Artetxe and Schwenk, 2019;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 342, |
|
"text": "Roy et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 362, |
|
"text": "Botha et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 529, |
|
"end": 539, |
|
"text": "Figure 1b)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Evaluating on such benchmarks henceforth fails to provide a comprehensive picture of the model's performance across the linguistic landscape, as the performance of MMLMs has been shown to vary significantly with the amount of pre-training data available for a language (Wu and Dredze, 2020) , as well according to the typological relatedness between the pivot and target languages (Lauscher et al., 2020) . While designing benchmarks to contain test data for all 100 languages supported by the MMLMs is be the ideal standard for multilingual evaluation, doing so requires prohibitively large amount of human effort, time and money.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 290, |
|
"text": "(Wu and Dredze, 2020)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 404, |
|
"text": "(Lauscher et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Machine Translation can be one way to extend test sets in different benchmarks to a much larger set of languages. Hu et al. (2020) provides pseudo test sets for tasks like XQUAD and XNLI, obtained by translating English test data into different languages, and shows reasonable estimates of the actual performance by evaluating on translated data but cautions about their reliability when the model is trained on translated data. The accuracy of translation based evaluation can be affected by the quality of translation and the technique incurs non-zero costs to obtain reliable translations. Moreover, transferring labels with translation might also be non-trivial for certain tasks like Part of Speech Tagging and Named Entity Recognition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 130, |
|
"text": "Hu et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, there has been some interest in predicting performance of NLP models without actually evaluating them on a test set. Xia et al. (2020) showed that it is possible to build regression models that can accurately predict evaluation scores of NLP models under different experimental settings using various linguistic and dataset specific features. Srinivasan et al. (2021) showed promising results specifically for MMLMs towards predicting their performance on downstream tasks for different languages in zero-shot and few-shot settings, and Ye et al. (2021) propose methods for more reliable performance prediction by estimating confidence intervals as well as predicting fine-grained performance measures. In this paper we argue that the performance prediction can be a possible avenue to address the current issues with Multilingual benchmarking by aiding in the estimation of performance of the MMLMs for the languages which lack any evaluation data for a given task. Not only this can help us give a better idea about the performance of a multilingual model on a task across a much larger set of languages and hence aiding in better model selection, but also enables applications in devising data collection strategies to maximize performance (Srinivasan et al., 2022) as well as in selecting the representative set of languages for a benchmark (Xia et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 144, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 377, |
|
"text": "Srinivasan et al. (2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 563, |
|
"text": "Ye et al. (2021)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1253, |
|
"end": 1278, |
|
"text": "(Srinivasan et al., 2022)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1355, |
|
"end": 1373, |
|
"text": "(Xia et al., 2020)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a case study demonstrating the effectiveness of performance prediction on four multilingual tasks, PAWS-X (Yang et al., 2019) XNLI (Conneau et al., 2018) , XQUAD (Artetxe et al., 2020) and TyDiQA-GoldP (Clark et al., 2020a) and show that it can often provide reliable estimates of the performance on different languages on par with evaluating them on translated test sets without any additional translation costs. We also demonstrate an additional use case of this method in selecting the best pivot language for fine-tuning the MMLM in order to maximize performance on some target language. To encourage research in this area and provide easy access for the community to utilize this framework, we will release our code and the datasets that we use for the case study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "(Yang et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 164, |
|
"text": "(Conneau et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 195, |
|
"text": "(Artetxe et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 234, |
|
"text": "(Clark et al., 2020a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rise in popularity of MMLMs like mBERT and XLMR have also lead to an increasing interest in creating different multilingual benchmarks to evaluate these models. We analyzed 18 different multilingual datasets proposed between the years 2015 to 2021, by searching and filtering for datasets containing the term Cross Lingual in the Papers with Code Datasets repository. 1 The types and language specific statistics of these studied benchmarks can be found in Table 3 in appendix. As can be seen in Figure 1a , there does appear to be an increasing trend in the number of multilingual datasets proposed each year, especially with a sharp increase observed during the year 2020. However, if we look at the number of languages covered by these different benchmarks (Figure 1b) , we see that most of the tasks have fewer than 20 languages supported with a median of 11 languages per task which is substantially lower than the 100 supported by the commonly used MMLMs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 373, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 468, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 509, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 775, |
|
"text": "(Figure 1b)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The only tasks which have been able to support a large fraction of these 100 languages are the Sequence Labelling tasks WikiANN (Pan et al., 2017) and Universal Dependencies (Nivre et al., 2020) which were a result of huge engineering, crowd sourcing and domain expertise efforts, and the Tatoeba dataset created from the parallel translation database maintained since more than 10 years, consisting of contributions from tens of thousands of members. However, we observed a dearth of supported languages in the remaining tasks that we surveyed, especially in NLU tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 146, |
|
"text": "(Pan et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 194, |
|
"text": "(Nivre et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We also observe a clear lack of diversity in the selected languages across different multilingual datasets. Figure 1c shows the number of tasks each language supported by the mBERT is present in and we observe a clear bias towards high resource languages, mostly covering class 4 and class 5 languages identified according to the taxonomy provided by Joshi et al. (2020) . The low resource languages given by class 2 or lower are severely under-represented in the benchmarks where the most popular (in terms of number of tasks it appears in) class 2 language i.e. Swahili appears only in 5 out of 18 benchmarks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 370, |
|
"text": "Joshi et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 117, |
|
"text": "Figure 1c", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We also categorized the the languages into the 6 major language families at the top level genetic groups 2 each of which cover at least 5% of the world's languages and plot language family wise representation of each task in Figure 2 . Except a couple of benchmarks, the majority of the languages present in these tasks are Indo-European, with very little representation from all the other language families which have either comparable or a higher language coverage as Indo-European.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 233, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There have been some recent benchmarks that address this issue of language diversity. The Ty-DiQA (Clark et al., 2020a) benchmark contains training and test datasets in 11 typologically diverse languages, covering 9 different language families. The XCOPA (Ponti et al., 2020) benchmark for causal commonsense reasoning also selects a set of 10 languages with high genealogical and areal diversities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 119, |
|
"text": "(Clark et al., 2020a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2 https://www.ethnologue.com/guides/largest-families While this is a step in the right direction and does give a much better idea about the performance of MMLMs over a diverse linguistic landscape, it is still difficult to cover through 10 or 11 languages all the factors that influence the performance of an MMLM like pre-training size (Wu and Dredze, 2020; Lauscher et al., 2020) , typological relatedness (syntactic, genealogical, areal, phonological etc) between the source and pivot languages (Lauscher et al., 2020; Pires et al., 2019) , sub-word overlap (Wu and Dredze, 2019) , tokenizer quality (Rust et al., 2021) etc. Through Performance Prediction as we will see in next section, we seek to estimate the performance of an MMLMs on different languages based on these factors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 358, |
|
"text": "(Wu and Dredze, 2020;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 381, |
|
"text": "Lauscher et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 521, |
|
"text": "(Lauscher et al., 2020;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 541, |
|
"text": "Pires et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 582, |
|
"text": "(Wu and Dredze, 2019)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 622, |
|
"text": "(Rust et al., 2021)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We would also like to point out that there are other problems with multilingual benchmarking as well. Recent multi-task multilingual benchmarks like X-GLUE (Liang et al., 2020) , XTREME (Hu et al., 2020) and XTREME-R mainly provide training datasets for different tasks only in English and evaluate for zero-shot transfer to other languages. However, this standard of using English as a default pivot language was put in question by Turc et al. (2021) , who showed empirically that German and Russian transfer more effectively to a set of diverse target languages. We shall see in the coming sections that the Performance Prediction approach can also be useful in identifying the best pivots for a target language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 176, |
|
"text": "(Liang et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 203, |
|
"text": "(Hu et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 451, |
|
"text": "Turc et al. (2021)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Problem with Multilingual Benchmarking", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We define Performance Prediction as the task of predicting performance of a machine learning model on different configurations of training and test data. Consider a multilingual model M pre-trained on a set of languages L, and a task T containing training datasets D p tr in languages p \u2208 P such that P \u2282 L and test datasets D t te in languages t \u2208 T such that T \u2282 L. Following Amini et al. 2009, we assume that both D p tr and D t te are the subsets of a multiview dataset D where each sample (x, y) \u2208 D has multiple views (defined in terms of languages) of the same object i.e. (x, y) def = {(x l , y l )|\u2200l \u2208 L} all of which are not observed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{
"text": "A training configuration for fine-tuning $M$ is given by the tuple $(\\Pi, \\Delta^{\\Pi}_{tr})$, where $\\Pi \\subseteq P$ and $\\Delta^{\\Pi}_{tr} = \\bigcup_{p \\in \\Pi} D^{p}_{tr}$. The performance on the test set $D^{t}_{te}$ for language $t \\in T$, when $M$ is fine-tuned on $(\\Pi, \\Delta^{\\Pi}_{tr})$, is denoted as $s_{M, T, t, D^{t}_{te}, \\Pi, \\Delta^{\\Pi}_{tr}}$, or $s$ for clarity, and is given as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Prediction for Multilingual Evaluation",
"sec_num": "3"
},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s = g(M, T, t, D t te , \u03a0, \u2206 \u03a0 tr )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In performance prediction we formulate estimating g by a parametric function f \u03b8 as a regression problem such that we can approximate s for various configurations with reasonable accuracy, given by s \u2248 f \u03b8 ([\u03d5(t); \u03d5(\u03a0); \u03d5(\u03a0, t); \u03d5(\u2206 \u03a0 tr )]) (2) where \u03d5(.) denotes the features representation of a given entity. Following Xia et al. (2020), we do not consider any features specific to M to focus more on how the performance varies for a given model with different data and language configurations. Since the languages for which we are trying to predict the performance might not have any data (labelled or unlabelled available), we also skip features for D t te from the equation. Note, we do consider coupled features for training and test languages i.e. \u03d5(\u03a0, t) as the interaction between the two has been shown to be a strong indicator of the performance of such models (Lauscher et al., 2020; Wu and Dredze, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 896, |
|
"text": "(Lauscher et al., 2020;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 897, |
|
"end": 917, |
|
"text": "Wu and Dredze, 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
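{
"text": "To make the regression formulation concrete, here is a minimal sketch of Equation (2) in Python (an illustration, not the paper's exact pipeline: the feature values and the choice of regressor are placeholders):\n\nimport numpy as np\nfrom sklearn.ensemble import GradientBoostingRegressor\n\n# phi concatenates the four feature blocks of Equation (2):\n# [phi(t); phi(Pi); phi(Pi, t); phi(Delta_tr^Pi)].\ndef phi(target_feats, pivot_feats, pair_feats, data_feats):\n    return np.concatenate([target_feats, pivot_feats, pair_feats, data_feats])\n\n# One row per observed configuration; all numbers here are made up.\nX = np.stack([\n    phi([0.2], [0.9], [0.7], [1.0]),  # e.g. transfer en -> de\n    phi([0.1], [0.9], [0.3], [1.0]),  # e.g. transfer en -> sw\n    phi([0.4], [0.9], [0.5], [1.0]),  # e.g. transfer en -> hi\n])\ny = np.array([0.81, 0.55, 0.67])      # observed task scores s\n\nf_theta = GradientBoostingRegressor().fit(X, y)\nprint(f_theta.predict(X))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Prediction for Multilingual Evaluation",
"sec_num": "3"
},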
|
{ |
|
"text": "Different training setups for multilingual models can be seen as special cases of our formulation. For zero-shot transfer we set \u03a0 = {p}, such that p \u0338 = t. This reduces the performance prediction problem to the one described in Lauscher et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 251, |
|
"text": "Lauscher et al. (2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "s zs \u2248 f \u03b8 ([\u03d5(t); \u03d5(p); \u03d5(p, t); \u03d5(\u2206 {p} tr )]) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are many ways to represent the feature representations \u03d5(.) that have been explored in pre- vious work, including pre-training data size, typological relatedness between the pivot and target languages and more. For a complete list of features that we use in our experiments, refer to Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 297, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance Prediction for Multilingual Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To demonstrate the effectiveness of Performance Prediction in estimating the performance on different languages, we evaluate the approach on classification tasks i.e. PAWS-X and XNLI, and two Question Answering tasks XQUAD and TyDiQA-GoldP. We choose these tasks as their labels are transferable via translation, so we can compare our method with the automatic translation based approach. TyDiQA-GoldP has test sets for different languages created independently to combat the translationese problem (Clark et al., 2020b) , while the other three have English test sets manually translated to the other languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 499, |
|
"end": 520, |
|
"text": "(Clark et al., 2020b)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For all the three tasks we try to estimate zero-shot performance of a fine-tuned mBERT model i.e. s zs on different languages. For PAWS-X, XNLI and XQUAD we have training data present only in English i.e. \u03a0 = {en} always, but TyDiQA-GoldP contains training sets in 9 different languages and we predict transfer from all of those. To train Performance Prediction models we use the performance data for mBERT provided in Hu et al. (2020) as well as train our own models when required and evaluate the performance on test dataset of different languages. The performance prediction models are evaluated using a leave one out strategy also called Leave One Language Out (LOLO) as used in Lauscher et al. (2020) ; Srinivasan et al. 2021, where we use the performance data of target languages in the set T \u2212{t} to predict the performance on a language t and do this for all t \u2208 T .", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 435, |
|
"text": "Hu et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 705, |
|
"text": "Lauscher et al. (2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
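{
"text": "A sketch of the LOLO protocol described above (illustrative only; a real run would use the mBERT performance data and the features of Table 1, and the Average Score Baseline of Section 4.2 is computed alongside for comparison):\n\nimport numpy as np\nfrom sklearn.ensemble import GradientBoostingRegressor\n\n# Leave One Language Out: hold out one target language t at a time,\n# fit the predictor on T - {t}, and measure the error on t.\ndef lolo_errors(langs, X, y):\n    out = {}\n    for i, lang in enumerate(langs):\n        mask = np.arange(len(langs)) != i\n        model = GradientBoostingRegressor().fit(X[mask], y[mask])\n        pred_err = abs(model.predict(X[i:i + 1])[0] - y[i])\n        base_err = abs(y[mask].mean() - y[i])  # Average Score Baseline\n        out[lang] = (pred_err, base_err)\n    return out\n\n# errors = lolo_errors(['de', 'sw', 'hi'], X, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},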
|
{ |
|
"text": "We compare the following methods for estimating the performance: 1. Average Score Baseline: In this method, to estimate the performance on a target language t we simply take a mean of the model's performance on the remaining T \u2212 {t} languages. Although conceptually simple, this is an unbiased estimate for the expected performance of the MMLM on different languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To estimate the performance on language t with this method, we automatically translate the test data in one of the languages t \u2032 \u2208 T \u2212 {t} , 3 to the target language t and evaluate the fine-tuned MMLM on the translated data. The performance on this pseudo-test set is used as the estimate of the actual performance. We use the Azure Translator 4 to translate the test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translate:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We consider two different regression models to estimate the perfor-mance in our experiments. i) XGBoost: We use the popular Tree Boosting algorithm XGBoost for solving the regression problem, which has been previously shown to achieve impressive results on the task (Xia et al., 2020; Srinivasan et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 284, |
|
"text": "(Xia et al., 2020;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 309, |
|
"text": "Srinivasan et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Predictors:", |
|
"sec_num": "3." |
|
}, |
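{
"text": "As an illustration, the XGBoost regressor can be instantiated with the hyperparameters reported in Appendix A.1 (100 estimators, maximum depth 10, learning rate 0.1); the data below is a placeholder:\n\nimport numpy as np\nimport xgboost as xgb\n\nX = np.random.rand(50, 8)  # placeholder: one feature row per configuration\ny = np.random.rand(50)     # placeholder: observed task scores\n\n# Hyperparameters as reported in Appendix A.1.\npredictor = xgb.XGBRegressor(n_estimators=100, max_depth=10, learning_rate=0.1)\npredictor.fit(X, y)\nprint(predictor.predict(X[:3]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Predictors:",
"sec_num": "3."
},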
|
{ |
|
"text": "ii) Group Lasso: Group Lasso (Yuan and Lin, 2006 ) is a multi-task linear regression model that uses an l 1 /l q norm as a regularization term to ensure common sparsity patterns among the regression weights of different tasks. In our experiments, we use the performance data for all the tasks in the XTREME-R benchmark to train group lasso models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 48, |
|
"text": "(Yuan and Lin, 2006", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Predictors:", |
|
"sec_num": "3." |
|
}, |
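{
"text": "Our experiments use the MuTaR implementation; as a rough stand-in for illustration, scikit-learn's MultiTaskLasso optimizes a related l1/l2 mixed-norm objective that couples sparsity patterns across tasks (it assumes a shared design matrix for all tasks, a simplification of the MuTaR setup; all data below is placeholder):\n\nimport numpy as np\nfrom sklearn.linear_model import MultiTaskLasso\n\nn_configs, n_feats, n_tasks = 40, 8, 4\nX = np.random.rand(n_configs, n_feats)  # shared features per configuration\nY = np.random.rand(n_configs, n_tasks)  # one score column per XTREME-R task\n\n# The l1/l2 penalty zeroes out whole coefficient rows jointly across tasks,\n# giving a common sparsity pattern; alpha mirrors the 0.005 strength from\n# Appendix A.1 (not necessarily equivalent in this stand-in).\nmodel = MultiTaskLasso(alpha=0.005).fit(X, Y)\nprint(model.coef_.shape)  # (n_tasks, n_feats)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Predictors:",
"sec_num": "3."
},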
|
{ |
|
"text": "The average LOLO errors for the four tasks and the four methods are given in Table 2 . As we can see both Translated baseline and Performance Predictors can obtain much lower errors compared to the Average Score Baseline on PAWS-X, XNLI and XQUAD tasks. Group Lasso outperforms all the other methods on PAWS-X dataset while for XNLI and XQUAD datasets though, the Translate method outperforms the two performance predictor models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "On TyDiQA-GoldP dataset , which had its test sets for different languages created independently without any translation, we see that the performance of Translate method drops with errors close to those obtained using the Average Score Baseline. While this behaviour is expected since the translated test sets and actual test sets now differ from each other, it still puts the reliability of the performance on translated data compared to the real data into question. Both XGBoost and Group Lasso though, obtain consistent improvements over the Baseline for TyDiQA-GoldP as well. Figure 3 provides a breakdown of the errors for each language included in TyDiQA-GoldP bench-mark, and again we can see that the Performance Predictors can outperform the Translate method almost all the languages except Telugu (te). Similar plots for the other tasks can be found in Figure 5 of Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 579, |
|
"end": 587, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 870, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Another benefit of using Performance Prediction models is that we can use them to select training configurations like training (pivot) languages or amount of training data to achieve desired performance. For our case study we demonstrate the application of our predictors towards selecting the best pivot language for each of the 100 languages supported by mBERT that maximizes the predicted performance on the language. The optimization problem can be defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pivot Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "p * (l) = arg max p\u2208P f \u03b8 ([\u03d5(l); \u03d5(p); \u03d5(p, l); \u03d5(\u2206 {p} tr )])", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pivot Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(4) Where p * (l) denotes the pivot language that results in the best predicted performance on language l \u2208 L. Since, P = {en} only for PAWS-X, XQUAD and XNLI i.e. training data is available only in English, we run this experiment on TyDiQA-GoldP dataset which has training data available in 9 languages i.e. P = {ar, bn, es, f i, id, ko, ru, sw, te}. We solve the optimization problem exactly by evaluating Equation 4 for all (p, l) pairs using a linear search and we use XGBoost Regressor as f \u03b8 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pivot Selection", |
|
"sec_num": "4.4" |
|
}, |
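{
"text": "Since training data is available for only 9 pivot languages, Equation (4) can be solved exactly by enumeration; a sketch (featurize is a hypothetical helper that builds the feature vector of Equation (4) for a given (pivot, target) pair):\n\n# Exact linear search over the candidate pivots of Equation (4).\ndef best_pivot(l, pivots, f_theta, featurize):\n    preds = {p: f_theta.predict([featurize(p, l)])[0] for p in pivots}\n    return max(preds, key=preds.get)\n\npivots = ['ar', 'bn', 'es', 'fi', 'id', 'ko', 'ru', 'sw', 'te']\n# p_star = {l: best_pivot(l, pivots, predictor, featurize) for l in mbert_langs}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Selection",
"sec_num": "4.4"
},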
|
{ |
|
"text": "The results of this exercise are summarized in Figure 4 . We see carefully selecting the best pivot for each language leads to substantially higher estimated performances instead of using the same language as pivot for all the languages. We also see that languages like Finnish, Indonesian, Arabic and Russian have higher average predicted performance across all the supported languages compared to English. This observation is also in line with Turc et al. (2021) observation that English might not always be the best pivot language for zero-shot transfer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 464, |
|
"text": "Turc et al. (2021)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 55, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pivot Selection", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this paper we discussed how the current state of benchmarking multilingual models is fundamentally limited by the amount of languages supported by the existing benchmarks, and proposed Performance Prediction as a potential solution to address the problem. Based on the discussion we summarize our findings through three key takeaways Figure 4 : Average Performance on the 100 languages supported by mBERT for each of the 9 pivot languages for which training data is available in TyDiQA-GoldP.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 345, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "1. Training performance prediction models on the existing evaluation data available for a benchmark can be a simple yet effective solution in estimating the MMLM's performance on a larger set of supported languages, which can often lead to much closer estimates compared to using the expected value estimate obtained from the existing languages. 2. One should be careful in using translated data to evaluate a model's performance on a language. Our experiments suggest that the performance measures estimated from the translated data can miscalculate the actual performance on the real world data for a language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "3. Performance Prediction can not only be effective for benchmarking on a larger set of languages but can also aid in selecting training strategies to maximize the performance of the MMLM on a given language which can be valuable towards building more accurate multilingual models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally, there are a number of ways in which the current performance prediction methods can be improved for a more reliable estimation. Both Xia et al. (2020) ; Srinivasan et al. (2021) observed that these models can struggle to generalize on lan-guages or configurations that have features that are remarkably different from the training data. Multitask learning as hinted by Lin et al. (2019) and our experiments with Group Lasso can be a possible way to address this issue. The current methods also do not make use of model specific features for estimating the performance. Tran et al. (2019) ; Nguyen et al. (2020); You et al. (2021) explore certain measures like entropy values, maximum evidence derived from a pre-trained model to estimate the transferability of the learned representations. It can be worth exploring if such measures can be helpful in providing more accurate predictions. Table 3 contains the information about the tasks considered in the survey for Section 2. The language-wise errors for tasks other than TyDiQA-GoldP can be found in Figure 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 158, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 185, |
|
"text": "Srinivasan et al. (2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 394, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 595, |
|
"text": "Tran et al. (2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 896, |
|
"end": 903, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1060, |
|
"end": 1068, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Performance Prediction Models 1. XGBoost: For training XGBoost regressor for the performance prediction, we use 100 estimators with a maximum depth of 10 and a learning rate of 0.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Training Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use a regularization strength of 0.005 for the l 1 /l 2 norm term in the objective function, and use the implementation provided in the MuTaR software package 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Group Lasso:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Translate Baseline: We use the Azure Translator 6 to translate the data in pivot language to target languages. For classification tasks XNLI and PAWS-X, the labels can be directly transferred across the translations. For QA tasks XQUAD and TyDiQA we use the approach described in Hu et al. (2020) to obtain the answer span in the translated test which involves enclosing the answer span in the original text within <b> </b> tags to recover the answer in the translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 296, |
|
"text": "Hu et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Group Lasso:", |
|
"sec_num": "2." |
|
}, |
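{
"text": "A sketch of this span-marking trick (it assumes the <b> </b> markers survive machine translation; a real pipeline needs fallbacks for when they do not):\n\n# Wrap the gold answer span so it can be recovered after translation.\ndef mark_answer(context, start, end):\n    return context[:start] + '<b>' + context[start:end] + '</b>' + context[end:]\n\n# Locate the marked span in the translated text and strip the tags.\ndef recover_answer(translated):\n    s, e = translated.find('<b>'), translated.find('</b>')\n    if s == -1 or e == -1:\n        return None, translated  # markers lost in translation\n    answer = translated[s + 3:e]\n    clean = translated.replace('<b>', '').replace('</b>', '')\n    return answer, clean",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Lasso:",
"sec_num": "2."
},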
|
{ |
|
"text": "1. Pre-training Size of a Language: The amount of data in a language l that was used to pre-train the MMLM. 2. Tokenizer Quality: We use the two metrics defined by Rust et al. (2021) to measure the quality of a multilingual tokenizer on a target language t. The first metric is Fertility which is equal to the average number of sub-words produced per tokenized word and the other is Percentage Continued Words which measures how often the tokenizer chooses to continue a word across at least two tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 182, |
|
"text": "Rust et al. (2021)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Features Description", |
|
"sec_num": null |
|
}, |
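{
"text": "A sketch of the two metrics, using a HuggingFace tokenizer as an example (the checkpoint name is illustrative):\n\nfrom transformers import AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')\n\n# Fertility: average sub-words per word; Percentage Continued Words:\n# share of words split into two or more sub-word tokens.\ndef tokenizer_quality(words):\n    pieces = [tok.tokenize(w) for w in words]\n    fertility = sum(len(p) for p in pieces) / len(words)\n    pct_continued = 100 * sum(len(p) >= 2 for p in pieces) / len(words)\n    return fertility, pct_continued\n\nprint(tokenizer_quality(['unbelievable', 'cat', 'Sprachwissenschaft']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Features Description",
"sec_num": null
},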
|
{ |
|
"text": "3. Subword Overlap: The subword overlap between a pivot and target language is defined as the fraction of sub-words that are common in the vocabulary of the two languages. Let V p and V t be the subword vocabularies of p and t. The subword overlap is then defined as :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Features Description", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "o sw (p, t) = |V p \u2229 V t | |V p \u222a V t | (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Features Description", |
|
"sec_num": null |
|
}, |
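{
"text": "Equation (5) is a Jaccard index over the two sub-word vocabularies; a direct implementation:\n\n# Sub-word overlap of Equation (5): Jaccard index of the two vocabularies.\ndef subword_overlap(vocab_p, vocab_t):\n    vp, vt = set(vocab_p), set(vocab_t)\n    return len(vp & vt) / len(vp | vt)\n\nprint(subword_overlap(['##ing', 'the', 'haus'], ['##ing', 'das', 'haus']))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Features Description",
"sec_num": null
},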
|
{ |
|
"text": "4. Relatedness between Lang2Vec features: Following Lin et al. (2019) and Lauscher et al. (2020) , we compute the typological relatedness between p and t from the linguistic features provided by the URIEL project (Littell et al., 2017) . We use syntactic (s syn (p, t)), phonological similarity (s pho (p, t)), genetic similarity (s gen (p, t)) and geographic distance ( ", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 69, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 96, |
|
"text": "Lauscher et al. (2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 235, |
|
"text": "(Littell et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Features Description", |
|
"sec_num": null |
|
}, |
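{
"text": "A sketch of how these URIEL-based relatedness features can be computed, assuming the interface of the lang2vec package (ISO 639-3 codes; the exact distance names are an assumption):\n\n# URIEL typological distances between a pivot p and a target t,\n# assuming the lang2vec package API.\nimport lang2vec.lang2vec as l2v\n\ndef uriel_relatedness(p, t):\n    kinds = ['syntactic', 'phonological', 'genetic', 'geographic']\n    return {k: l2v.distance(k, p, t) for k in kinds}\n\nprint(uriel_relatedness('eng', 'deu'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Features Description",
"sec_num": null
},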
|
{ |
|
"text": "https://paperswithcode.com/datasets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for our experiments we use t \u2032 = p i.e. we use test data in pivot language which is often English to translate to t 4 https://azure.microsoft.com/en-us/services/cognitiveservices/translator/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/hichamjanati/mutar 6 https://azure.microsoft.com/en-us/services/cognitive-services/translator/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning from multiple partially observed views -an application to multilingual text categorization", |
|
"authors": [ |
|
{
"first": "Massih",
"middle": ["R"],
"last": "Amini",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
}
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massih R. Amini, Nicolas Usunier, and Cyril Goutte. 2009. Learning from multiple partially observed views -an application to multilingual text categoriza- tion. In Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "On the Cross-lingual Transferability of Monolingual Representations", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of ACL 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the Cross-lingual Transferability of Mono- lingual Representations. In Proceedings of ACL 2020.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Transactions of the ACL 2019.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Entity Linking in 100 Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Botha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zifei", |
|
"middle": [], |
|
"last": "Shan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7833--7845", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.630" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan A. Botha, Zifei Shan, and Daniel Gillick. 2020. En- tity Linking in 100 Languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7833-7845, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Rethinking embedding coupling in pre-trained language models", |
|
"authors": [ |
|
{
"first": "Hyung",
"middle": ["Won"],
"last": "Chung",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Fevry",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
|
], |
|
"year": 2021, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hyung Won Chung, Thibault Fevry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2021. Rethinking em- bedding coupling in pre-trained language models. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaly", |
|
"middle": [], |
|
"last": "Nikolaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennimaria", |
|
"middle": [], |
|
"last": "Palomaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. TyDi QA: A Bench- mark for Information-Seeking Question Answering in Typologically Diverse Languages. In Transactions of the Association of Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaly", |
|
"middle": [], |
|
"last": "Nikolaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennimaria", |
|
"middle": [], |
|
"last": "Palomaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "454--470", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00317" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020b. TyDi QA: A bench- mark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454- 470.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "XNLI: Evaluating crosslingual sentence representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruty", |
|
"middle": [], |
|
"last": "Rinott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2475--2485", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross- lingual sentence representations. In Proceedings of EMNLP 2018, pages 2475-2485.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", |
|
"authors": [ |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Siddhant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generaliza- tion. CoRR, abs/2003.11080.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The state and fate of linguistic diversity and inclusion in the NLP world", |
|
"authors": [ |
|
{ |
|
"first": "Pratik", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastin", |
|
"middle": [], |
|
"last": "Santy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amar", |
|
"middle": [], |
|
"last": "Budhiraja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalika", |
|
"middle": [], |
|
"last": "Bali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6282--6293", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.560" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Lauscher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vinit", |
|
"middle": [], |
|
"last": "Ravishankar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4483--4499", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.363" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Lauscher, Vinit Ravishankar, Ivan Vuli\u0107, and Goran Glava\u0161. 2020. From zero to hero: On the limitations of zero-shot language transfer with mul- tilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4483-4499, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "MLQA: Evaluating Cross-lingual Extractive Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barlas", |
|
"middle": [], |
|
"last": "Oguz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruty", |
|
"middle": [], |
|
"last": "Rinott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of ACL 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evalu- ating Cross-lingual Extractive Question Answering. In Proceedings of ACL 2020.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "XGLUE: A new benchmark datasetfor cross-lingual pre-training, understanding and generation", |
|
"authors": [ |
|
{ |
|
"first": "Yaobo", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yeyun", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ning", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fenfei", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhen", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linjun", |
|
"middle": [], |
|
"last": "Shou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daxin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guihong", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruofei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sining", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taroon", |
|
"middle": [], |
|
"last": "Bharti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Qiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiun-Hung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Winnie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuguang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Campos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rangan", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6008--6018", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.484" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark datasetfor cross-lingual pre-training, un- derstanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Antonios Anastasopoulos, Patrick Littell, and Graham Neubig", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Hsiang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chian-Yu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zirui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuyan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shruti", |
|
"middle": [], |
|
"last": "Rijhwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junxian", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhisong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Littell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3125--3135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junx- ian He, Zhisong Zhang, Xuezhe Ma, Antonios Anas- tasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learn- ing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125-3135, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Littell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mortensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Kairis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlisle", |
|
"middle": [], |
|
"last": "Turner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lori", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "8--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Leep: A new measure to evaluate transferability of learned representations", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Cuong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Hassner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cedric", |
|
"middle": [], |
|
"last": "Seeger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Archambeau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cuong V. Nguyen, Tal Hassner, Matthias Seeger, and Cedric Archambeau. 2020. Leep: A new measure to evaluate transferability of learned representations.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4034--4043", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Cross-lingual name tagging and linking for 282 languages", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1946--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Noth- man, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Pro- ceedings of ACL 2017, pages 1946-1958.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "How multilingual is multilingual BERT?", |
|
"authors": [ |
|
{ |
|
"first": "Telmo", |
|
"middle": [], |
|
"last": "Pires", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Schlinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4996--5001", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1493" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "XCOPA: A multilingual dataset for causal commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Edoardo Maria Ponti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qianchu", |
|
"middle": [], |
|
"last": "Majewska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2362--2376", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.185" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edoardo Maria Ponti, Goran Glava\u0161, Olga Majewska, Qianchu Liu, Ivan Vuli\u0107, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal common- sense reasoning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "LAReQA: Language-agnostic answer retrieval from a multilingual pool", |
|
"authors": [ |
|
{ |
|
"first": "Uma", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Al-Rfou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Barua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5919--5930", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.477" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilin- gual pool. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5919-5930, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "XTREME-R: Towards more challenging and nuanced multilingual evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Botha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Siddhant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinlan", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10215--10245", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.emnlp-main.802" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder, Noah Constant, Jan Botha, Aditya Sid- dhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, and Melvin John- son. 2021. XTREME-R: Towards more challenging and nuanced multilingual evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10215-10245, Online and Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "How good is your tokenizer? on the monolingual performance of multilingual language models", |
|
"authors": [ |
|
{ |
|
"first": "Phillip", |
|
"middle": [], |
|
"last": "Rust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Pfeiffer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3118--3135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.243" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phillip Rust, Jonas Pfeiffer, Ivan Vuli\u0107, Sebastian Ruder, and Iryna Gurevych. 2021. How good is your tok- enizer? on the monolingual performance of multilin- gual language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 3118-3135, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Litmus predictor: An ai assistant for building reliable, high-performing and fair multilingual nlp systems", |
|
"authors": [ |
|
{ |
|
"first": "Anirudh", |
|
"middle": [], |
|
"last": "Srinivasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gauri", |
|
"middle": [], |
|
"last": "Kholkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Kejriwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanuja", |
|
"middle": [], |
|
"last": "Ganu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandipan", |
|
"middle": [], |
|
"last": "Dandapat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunayana", |
|
"middle": [], |
|
"last": "Sitaram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balakrishnan", |
|
"middle": [], |
|
"last": "Santhanam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Somak", |
|
"middle": [], |
|
"last": "Aditya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalika", |
|
"middle": [], |
|
"last": "Bali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "Thirty-sixth AAAI Conference on Artificial Intelligence. AAAI. System Demonstration", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anirudh Srinivasan, Gauri Kholkar, Rahul Kejriwal, Tanuja Ganu, Sandipan Dandapat, Sunayana Sitaram, Balakrishnan Santhanam, Somak Aditya, Kalika Bali, and Monojit Choudhury. 2022. Litmus predictor: An ai assistant for building reliable, high-performing and fair multilingual nlp systems. In Thirty-sixth AAAI Conference on Artificial Intelligence. AAAI. System Demonstration.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Kalika Bali, and Monojit Choudhury. 2021. Predicting the performance of multilingual nlp models", |
|
"authors": [ |
|
{ |
|
"first": "Anirudh", |
|
"middle": [], |
|
"last": "Srinivasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunayana", |
|
"middle": [], |
|
"last": "Sitaram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanuja", |
|
"middle": [], |
|
"last": "Ganu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandipan", |
|
"middle": [], |
|
"last": "Dandapat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalika", |
|
"middle": [], |
|
"last": "Bali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2110.08875" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anirudh Srinivasan, Sunayana Sitaram, Tanuja Ganu, Sandipan Dandapat, Kalika Bali, and Monojit Choud- hury. 2021. Predicting the performance of multilin- gual nlp models. arXiv preprint arXiv:2110.08875.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Transferability and hardness of supervised classification tasks", |
|
"authors": [ |
|
{ |
|
"first": "Anh", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cuong", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Hassner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anh T. Tran, Cuong V. Nguyen, and Tal Hassner. 2019. Transferability and hardness of supervised classifica- tion tasks.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Revisiting the primacy of english in zero-shot cross-lingual transfer", |
|
"authors": [ |
|
{ |
|
"first": "Iulia", |
|
"middle": [], |
|
"last": "Turc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, and Kristina Toutanova. 2021. Revisiting the primacy of english in zero-shot cross-lingual transfer.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "833--844", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1077" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Are all languages created equal in multilingual BERT?", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--130", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.repl4nlp-1.16" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Predicting performance for natural language processing tasks", |
|
"authors": [ |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruochen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8625--8646", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.764" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Predicting performance for natural language processing tasks. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 8625- 8646, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer", |
|
"authors": [ |
|
{ |
|
"first": "Linting", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihir", |
|
"middle": [], |
|
"last": "Kale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Al-Rfou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Siddhant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Barua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "483--498", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.41" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "PAWS-X: A cross-lingual adversarial dataset for paraphrase identification", |
|
"authors": [ |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Tar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of EMNLP 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3685--3690", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversar- ial dataset for paraphrase identification. In Proceed- ings of EMNLP 2019, pages 3685-3690.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Towards more fine-grained and reliable NLP performance prediction", |
|
"authors": [ |
|
{ |
|
"first": "Zihuiwen", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinlan", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3703--3714", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zihuiwen Ye, Pengfei Liu, Jinlan Fu, and Graham Neu- big. 2021. Towards more fine-grained and reliable NLP performance prediction. In Proceedings of the 16th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Main Volume, pages 3703-3714, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Logme: Practical assessment of pretrained models for transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Kaichao", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianmin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingsheng", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. 2021. Logme: Practical assessment of pre- trained models for transfer learning.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Model selection and estimation in regression with grouped variables", |
|
"authors": [ |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", |
|
"volume": "68", |
|
"issue": "1", |
|
"pages": "49--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming Yuan and Yi Lin. 2006. Model selection and esti- mation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Number of multilingual tasks containing test data for each of the 106 languages supported by the MMLMs (mBERT, XLMR). The bars are shaded according to the class taxonomy proposed byJoshi et al. (2020)." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Figure 1" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Task wise distribution of language families i.e. fraction of languages belonging to a particular language for a task." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Lin et al., 2019;Xia et al., 2020; Srinivasan et al., 2021)" |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Language Wise Errors (LOLO setting) for predicting performances on the TyDiQA-GoldP dataset." |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Figure 5" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Features used to represent the languages and datasets used. For more details refer to Section A.2 in Appendix.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Mean Absolute Errors (MAE) (scaled by 100 for readability) on the the three tasks for different methods of estimating performance.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"text": "d geo (p, t)). For details, please seeLittell et al. (2017)", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Type</td><td>Release Year</td><td>Number of Lan-</td><td>Number of Lan-</td></tr><tr><td/><td/><td/><td>guages</td><td>guage Families</td></tr><tr><td>UDPOS</td><td>Structure Prediction</td><td>2015</td><td>57</td><td>13</td></tr><tr><td>WikiANN</td><td>Structure Prediction</td><td>2017</td><td>100</td><td>15</td></tr><tr><td>XNLI</td><td>Classification</td><td>2018</td><td>15</td><td>7</td></tr><tr><td>XCOPA</td><td>Classification</td><td>2020</td><td>10</td><td>10</td></tr><tr><td>XQUAD</td><td>Question Answering</td><td>2020</td><td>11</td><td>6</td></tr><tr><td>MLQA</td><td>Question Answering</td><td>2020</td><td>7</td><td>4</td></tr><tr><td>TyDiQA</td><td>Question Answering</td><td>2020</td><td>11</td><td>9</td></tr><tr><td>MewsliX</td><td>Retrieval</td><td>2020</td><td>11</td><td>5</td></tr><tr><td>LAReQA</td><td>Retrieval</td><td>2020</td><td>11</td><td>6</td></tr><tr><td>PAWSX</td><td colspan=\"2\">Sentence Classification 2019</td><td>7</td><td>4</td></tr><tr><td>BUCC</td><td>Retrieval</td><td>2016</td><td>4</td><td>2</td></tr><tr><td>MLDoc</td><td>Classification</td><td>2018</td><td>8</td><td>3</td></tr><tr><td>QALD-9</td><td>Question Answering</td><td>2022</td><td>9</td><td>2</td></tr><tr><td>xSID</td><td>Classification</td><td>2021</td><td>11</td><td>6</td></tr><tr><td colspan=\"2\">WikiNEuRal Structure Prediction</td><td>2021</td><td>8</td><td>1</td></tr><tr><td colspan=\"2\">WikiLingua Summarization</td><td>2020</td><td>18</td><td>9</td></tr><tr><td>XL-BEL</td><td>Retrieval</td><td>2021</td><td>10</td><td>7</td></tr><tr><td>Tatoeba</td><td>Retrieval</td><td>2019</td><td>73</td><td>14</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "The list of tasks surveyed for the discussion in Section 2.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Mean Absolute Error</td><td>0.02 0.04 0.06 0.08 0.10 0.12</td><td colspan=\"2\">Baseline (Overall Average) Translate-Test Error XGBoost Group Lasso</td><td/><td/><td/><td/><td/><td/><td/><td/><td>Mean Absolute Error</td><td colspan=\"2\">0.025 0.050 0.075 0.100 0.125 0.150 0.175</td><td colspan=\"4\">Baseline (Overall Average) Translate-Test Error XGBoost Group Lasso</td></tr><tr><td/><td>0.00</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">0.000</td><td/><td/><td/></tr><tr><td/><td/><td>de</td><td>es</td><td>fr</td><td colspan=\"2\">ja Language</td><td>ko</td><td/><td>zh</td><td colspan=\"2\">avg</td><td/><td/><td>zh</td><td>es</td><td>de</td><td colspan=\"2\">ar</td><td>ur</td><td>ru Language bg el fr</td><td>hi</td><td>sw</td><td>tr</td><td>vi</td><td>avg</td></tr><tr><td/><td colspan=\"11\">(a) Language-Wise Errors for PAWS-X dataset.</td><td/><td/><td colspan=\"5\">(b) Language-Wise Errors for XNLI dataset.</td></tr><tr><td/><td/><td/><td/><td/><td>Mean Absolute Error</td><td colspan=\"2\">0.04 0.06 0.08 0.10 0.12</td><td colspan=\"3\">Baseline (Overall Average) Translate-Test Error XGBoost Group Lasso</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">0.02</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">0.00</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">es</td><td>de</td><td>el</td><td>ru</td><td colspan=\"2\">tr Language ar</td><td>vi</td><td>zh</td><td colspan=\"2\">hi</td><td>avg</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |