{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:11.181012Z"
},
"title": "AfriTeVa: Extending \"Small Data\" Pretraining Approaches to Sequence-to-Sequence Models",
"authors": [
{
"first": "Odunayo",
"middle": [],
"last": "Ogundepo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Akintunde",
"middle": [],
"last": "Oladipo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mofetoluwa",
"middle": [],
"last": "Adeyemi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kelechi",
"middle": [],
"last": "Ogueji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pretrained language models represent the state of the art in NLP, but the successful construction of such models often requires large amounts of data and computational resources. Thus, the paucity of data for low-resource languages impedes the development of robust NLP capabilities for these languages. There has been some recent success in pretraining encoderonly models solely on a combination of lowresource African languages, exemplified by AfriBERTa. In this work, we extend the approach of \"small data\" pretraining to encoderdecoder models. We introduce AfriTeVa, a family of sequence-to-sequence models derived from T5 that are pretrained on 10 African languages from scratch. With a pretraining corpus of only around 1GB, we show that it is possible to achieve competitive downstream effectiveness for machine translation and text classification, compared to larger models trained on much more data. All the code and model checkpoints described in this work are publicly available at https://github.com/castorini/ afriteva.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Pretrained language models represent the state of the art in NLP, but the successful construction of such models often requires large amounts of data and computational resources. Thus, the paucity of data for low-resource languages impedes the development of robust NLP capabilities for these languages. There has been some recent success in pretraining encoderonly models solely on a combination of lowresource African languages, exemplified by AfriBERTa. In this work, we extend the approach of \"small data\" pretraining to encoderdecoder models. We introduce AfriTeVa, a family of sequence-to-sequence models derived from T5 that are pretrained on 10 African languages from scratch. With a pretraining corpus of only around 1GB, we show that it is possible to achieve competitive downstream effectiveness for machine translation and text classification, compared to larger models trained on much more data. All the code and model checkpoints described in this work are publicly available at https://github.com/castorini/ afriteva.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Transfer learning has driven many recent advances in natural language processing, and leveraging pretrained models for downstream tasks has produced state-of-the-art results on many tasks. These results can be attributed to general-purpose knowledge that is gained when a model is pretrained on a data-rich task (Raffel et al., 2020) . This paradigm also extends to multilingual settings, where a model is pretrained on text in multiple languages and then fine-tuned for downstream tasks in those languages. Some of these models, for example, mBERT and XML-R (Conneau et al., 2020) , have been trained on large combination of languages comprised of high-resource and low-resource languages, amounting to many gigabytes of data.",
"cite_spans": [
{
"start": 312,
"end": 333,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 559,
"end": 581,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the effectiveness of transfer learning on downstream tasks, T5 (Raffel et al., 2020) introduced a unified framework where all NLP tasks can be framed as a text-to-text problem, enabling us to train a single model for multiple tasks. This framework is simple and effective by enabling knowledge transfer from high-resource to low-resource tasks (Nagoudi et al., 2022) . Unlike BERT-based models, which are encoder-only models, T5 and its multilingual variants such as mT5 (Xue et al., 2021b) and byT5 (Xue et al., 2021a) are encoder-decoder models that are more suited for natural language tasks involving generation. Both mT5 and byT5 were trained on 100+ languages, of which only 13 were low-resource African languages, making up less than 6% of the total training data. Despite the existence of 2000+ African languages (Eberhard et al., 2019) , only a few of them are featured in pretraining, and thus it is unclear how effective these models generalize to those languages.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 351,
"end": 373,
"text": "(Nagoudi et al., 2022)",
"ref_id": null
},
{
"start": 478,
"end": 497,
"text": "(Xue et al., 2021b)",
"ref_id": null
},
{
"start": 507,
"end": 526,
"text": "(Xue et al., 2021a)",
"ref_id": null
},
{
"start": 828,
"end": 851,
"text": "(Eberhard et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paucity of data for many African languages has been a stumbling block for developing robust NLP capabilities. However, some works have shown that it is possible to train language models with smaller amounts of data, albeit on encoderonly models. For example, Micheli et al. (2020) obtained good results on the French Question Answering Dataset (FQuAD) by pretraining on as little as 100MB of text. Directly related to our present study, AfriBERTa (Ogueji et al., 2021) pretrained a RoBERTa-based model from scratch on 10 African languages with only around 1GB of data, outperforming mBERT and XLM-R on tasks in several languages. Given this context, we pose the following research question:",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "Micheli et al. (2020)",
"ref_id": "BIBREF21"
},
{
"start": 451,
"end": 472,
"text": "(Ogueji et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research Question: Can \"small data\" pretraining for low-resource African languages exemplified by AfriBERTa be extended from encoder-only models to encoder-decoder models?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer this research question, we pretrained encoder-decoder models in low-resource settings using relatively little data and evaluated our models against other models that have been pretrained on much more data. We introduce AfriTeVa, a family of pretrained transformer-based sequenceto-sequence models derived from T5, pretrained on 10 low-resource African languages. AfriTeVa gets its name from the fact that \"V\" is the Roman numeral for \"5\", which reflects its membership in the T5 family. We pretrained from random initialization with only around 1GB of data (using the same corpus as AfriBERTa) and evaluated our models on text classification and machine translation. To the best of our knowledge, this is the first encoderdecoder model pretrained solely on low-resource African languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With respect to our research question, our results are suggestive but not conclusive. AfriTeVa demonstrates better results than mT5, but falls short of other models pretrained with richer resources. However, existing experiments conflate several factors that we have not successfully untangled. Nevertheless, our preliminary study sets the ground for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interest in low-resource African languages has increased in recent years. However, the question of how NLP capabilities can be scaled to many of these languages has yet to be answered fully (Nekoto et al., 2020) . Adebara and Abdul-Mageed (2022) highlighted the challenges of using and extending current NLP technologies to communities with different fabrics and languages. A common characteristic of African languages is the absence of large monolingual data for pretraining, which directly impacts the ability to build high-quality language models for these languages.",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Nekoto et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP for African Languages",
"sec_num": "2.1"
},
{
"text": "Some of the more recent work in benchmarking and advancing the state of machine translation for African languages include the following: Adelani et al. (2022) investigated how to best leverage existing pretrained models for machine translation in 16 languages. They also released a corpus comprising machine translation data in all 16 languages. Emezue and Dossou (2021) released MMTAfrica, which is a many-to-many multilingual translation system for 6 African languages. Duh et al. (2020) provided a benchmark state-of-the-art neural ma-chine translation system on two African languages, Somali and Swahili, while Martinus and Abbott (2019) leveraged current neural machine translation techniques to train translation models for 5 African languages.",
"cite_spans": [
{
"start": 472,
"end": 489,
"text": "Duh et al. (2020)",
"ref_id": "BIBREF9"
},
{
"start": 615,
"end": 641,
"text": "Martinus and Abbott (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP for African Languages",
"sec_num": "2.1"
},
{
"text": "Some researchers have been interested in methods to adapt already pretrained models to unseen languages, thus enabling the ability to pretrain in high-resource settings and extend to low-resource languages. Liu et al. (2021) introduced a continual pretraining framework to adapt the mBART model for machine translation to unseen languages, while Baziotis et al. (2020) incorporated an LM as a prior by adding a regularization term for low-resource machine translation.",
"cite_spans": [
{
"start": 207,
"end": 224,
"text": "Liu et al. (2021)",
"ref_id": "BIBREF34"
},
{
"start": 346,
"end": 368,
"text": "Baziotis et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP for African Languages",
"sec_num": "2.1"
},
{
"text": "XLM-R (Conneau et al., 2020) , mBERT, and mT5 (Xue et al., 2021b) have extended masked language modelling to multilingual settings by jointly pretraining large transformer models on up to 100+ languages. This work demonstrates the effectiveness of multilingual models on downstream tasks, even for low-resource languages. This has been attributed to shared vocabulary items, generalizable representations the model learns (Artetxe et al., 2020) , and model architectures (K et al., 2020) .",
"cite_spans": [
{
"start": 6,
"end": 28,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 46,
"end": 65,
"text": "(Xue et al., 2021b)",
"ref_id": null
},
{
"start": 422,
"end": 444,
"text": "(Artetxe et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 471,
"end": 487,
"text": "(K et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pretrained Models",
"sec_num": "2.2"
},
{
"text": "Still, these models contain only a handful of African languages. Ogueji et al. (2021) explored the viability of pretraining multilingual models from scratch using only limited amounts of data on a number of African languages-this is the \"small data\" pretraining approach we referred to in the introduction. They demonstrated the competitiveness of this \"small data\" approach and released comparatively smaller models that match and in some cases exceed the effectiveness of larger models pretrained on much more data. As a follow-up, Oladipo et al. (2022) explored the effect of vocabulary size and other factors affecting transfer in AfriBERTa-based models. Our work builds on this thread: We wondered if the approach taken by AfriBERTa can be extended to encoder-decoder models.",
"cite_spans": [
{
"start": 65,
"end": 85,
"text": "Ogueji et al. (2021)",
"ref_id": "BIBREF24"
},
{
"start": 534,
"end": 555,
"text": "Oladipo et al. (2022)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pretrained Models",
"sec_num": "2.2"
},
{
"text": "Following the T5 architecture (Raffel et al., 2020) , we consider 3 model sizes for AfriTeVa: small (64M parameters), base (229M parameters), and large (745M parameters). Each model is similar in configuration to their T5 counterparts. ",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
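To make the three sizes concrete, the sketch below instantiates T5-style configurations at roughly the dimensions reported above. It assumes the standard public T5 small/base/large dimensions combined with the 70,000-token AfriTeVa vocabulary described later in Section 3.1; the exact hyperparameters of the released checkpoints may differ, so treat this as an illustration rather than the authors' configuration.

```python
from transformers import T5Config

# Hypothetical configs mirroring standard T5 small/base/large dimensions,
# combined with the 70k-token vocabulary described in Section 3.1 (assumption).
SIZES = {
    "small": dict(d_model=512, d_ff=2048, num_layers=6, num_heads=8),
    "base": dict(d_model=768, d_ff=3072, num_layers=12, num_heads=12),
    "large": dict(d_model=1024, d_ff=4096, num_layers=24, num_heads=16),
}

configs = {name: T5Config(vocab_size=70_000, **dims) for name, dims in SIZES.items()}
print(configs["small"].d_model, configs["large"].num_layers)
```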
{
"text": "To adapt the T5 architecture (Raffel et al., 2020; Xue et al., 2021b) to African languages, we pretrained AfriTeVa on the AfriBERTa corpus (Ogueji et al., 2021) , a multilingual corpus comprising 10 low-resource African languages: Afaan Oromoo, Amharic, Gahuza, Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya, and Yor\u00f9b\u00e1. Table 2 presents characteristics of text in each language in more detail. As we can see, the languages vary in terms of morphology and typology. Amharic, Somali, and Tigrinya have subject-object-verb (SOV) word order while the other languages have subjectverb-object (SVO) word order. The languages also belong to different written scripts, another aspect of diversity. In addition to AfriTeVa pretrained with only African languages, we also pretrained another model jointly with English and the 10 languages listed above. We sampled 1,500,000 English sentences from the Common Crawl 1 to match the language with the most sentences, which is Swahili. Our models were pretrained with a vocabulary size of 70,000 tokens learned using a SentencePiece unigram subword tokenizer (Kudo and Richardson, 2018) . The model that includes English in pretraining used a different tokenizer with the same vocabulary size.",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Raffel et al., 2020;",
"ref_id": "BIBREF27"
},
{
"start": 51,
"end": 69,
"text": "Xue et al., 2021b)",
"ref_id": null
},
{
"start": 139,
"end": 160,
"text": "(Ogueji et al., 2021)",
"ref_id": "BIBREF24"
},
{
"start": 1105,
"end": 1132,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 331,
"end": 338,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Pretraining",
"sec_num": "3.1"
},
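A minimal sketch of the tokenizer step described above, assuming SentencePiece's Python API; the input path, output prefix, and sample sentence are placeholders, not the authors' actual files or training settings beyond the 70,000-token unigram vocabulary stated in the text.

```python
import sentencepiece as spm

# Train a unigram SentencePiece model with a 70,000-token vocabulary over the
# (hypothetical) concatenated pretraining text.
spm.SentencePieceTrainer.train(
    input="pretraining_corpus.txt",   # placeholder path
    model_prefix="afriteva_sp",       # placeholder output prefix
    vocab_size=70_000,
    model_type="unigram",
    character_coverage=1.0,           # retain characters from all scripts
)

sp = spm.SentencePieceProcessor(model_file="afriteva_sp.model")
print(sp.encode("Habari ya leo", out_type=str))  # illustrative Swahili input
```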
{
"text": "We pretrained AfriTeVa using the masked language modelling \"span-corruption\" training objective in T5, where consecutive spans of dropped-out tokens are replaced by a single sentinel token that does not correspond to any wordpiece in the tokenizer. We pretrained our models for 500,000 steps with effective batch sizes shown in Table 1 . Model perplexity during training was evaluated on varying 1 https://data.statmt.org/cc-100/ amounts of sentences sampled from the different languages, consisting of roughly 440,000 sentences for the models without English, and 540,000 sentences for the model with English.",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Pretraining",
"sec_num": "3.1"
},
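The following is a simplified, self-contained illustration of the span-corruption objective described above: consecutive dropped spans are replaced by sentinel tokens in the input, and the target asks the decoder to reconstruct the dropped spans. It is a toy sketch over string tokens, not the exact T5 preprocessing pipeline, which operates on SentencePiece ids with a fixed corruption rate and mean span length.

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, span_len=3, seed=1):
    """Toy T5-style span corruption: replace consecutive spans with sentinels."""
    rng = random.Random(seed)
    budget = max(1, int(len(tokens) * corruption_rate))  # tokens to drop
    inputs, targets, i, sid = [], [], 0, 0
    while i < len(tokens):
        if budget > 0 and rng.random() < corruption_rate:
            span = min(span_len, budget, len(tokens) - i)
            sentinel = f"<extra_id_{sid}>"
            inputs.append(sentinel)             # one sentinel replaces the span
            targets.append(sentinel)
            targets.extend(tokens[i:i + span])  # decoder must recover the span
            sid += 1
            budget -= span
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

tokens = [f"tok{i}" for i in range(20)]  # placeholder token sequence
inp, tgt = span_corrupt(tokens)
print(inp)
print(tgt)
```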
{
"text": "All pretraining and fine-tuning experiments were conducted using the Huggingface transformers library (Wolf et al., 2020 ) on a TPU VM of type v3-8 provisioned on Google Cloud using the JAX/FLAX framework. All models were pretrained using a learning rate of 3e-4 and a maximum sequence length of 512 tokens using the Adafactor optimizer (Shazeer and Stern, 2018) .",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "(Wolf et al., 2020",
"ref_id": "BIBREF31"
},
{
"start": 337,
"end": 362,
"text": "(Shazeer and Stern, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining",
"sec_num": "3.1"
},
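As a sketch of the optimization setup (Adafactor with a constant 3e-4 learning rate), the snippet below uses optax, the optimizer library commonly paired with JAX/FLAX training loops; this is an assumption about tooling for illustration only, and the toy parameter tree and loss stand in for the actual T5 model and span-corruption objective.

```python
import jax
import jax.numpy as jnp
import optax

# Toy parameter pytree standing in for the T5 weights (placeholder only).
params = {"w": jnp.ones((8, 8))}

# Adafactor with a constant 3e-4 learning rate, matching the paper's setting.
optimizer = optax.adafactor(learning_rate=3e-4)
opt_state = optimizer.init(params)

def loss_fn(p):
    # Placeholder loss; the real objective is span-corruption cross-entropy.
    return jnp.mean(p["w"] ** 2)

grads = jax.grad(loss_fn)(params)
updates, opt_state = optimizer.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)
print(jax.tree_util.tree_map(jnp.shape, params))
```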
{
"text": "Given the lack of benchmark datasets that would be appropriate for sequence-to-sequence models for low-resource African languages, we focused on two downstream tasks: machine translation and text classification. Text Classification: We performed text classification on news title topic classification datasets for Hausa and Yor\u00f9b\u00e1 (Hedderich et al., 2020) . The authors established strong baselines using multilingual pretrained language models and multilingual pretrained language models + English adaptive finetuning. We cast the text classification task into a text-to-text format where the decoder generates two tokens; the class token and an end-of-sequence token. More precisely, the text classification task is framed as:",
"cite_spans": [
{
"start": 331,
"end": 355,
"text": "(Hedderich et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "3.2"
},
{
"text": "input: sentence [eos] output: label [eos]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "3.2"
},
{
"text": "We do not use a task prefix for these experiments. In cases where the class labels are in a language not seen during pretraining or do not exist as a single token in the vocabulary, we replace them with randomly chosen tokens from the vocabulary and fine-tune. During inference, we map the tokens back to the initial labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "3.2"
},
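A small sketch of the label-to-token mapping described above: class labels that do not exist as single vocabulary tokens are assigned surrogate tokens for fine-tuning and mapped back to the original labels at inference. The vocabulary and labels here are illustrative placeholders, not the actual AfriTeVa vocabulary or dataset labels.

```python
import random

def build_label_maps(labels, vocab, seed=13):
    """Assign each class label a single vocabulary token (the label itself if it
    is already in the vocabulary, otherwise a randomly chosen surrogate token)."""
    rng = random.Random(seed)
    free = sorted(t for t in vocab if t not in labels)
    label_to_token, token_to_label = {}, {}
    for label in labels:
        token = label if label in vocab else free.pop(rng.randrange(len(free)))
        label_to_token[label] = token
        token_to_label[token] = label
    return label_to_token, token_to_label

vocab = {"duniya", "siyasa", "wasanni", "lafiya", "kasuwanci"}  # toy vocabulary
labels = ["politics", "sport", "health"]                        # toy class labels
label_to_token, token_to_label = build_label_maps(labels, vocab)

# Fine-tuning target: "<surrogate token> [eos]"; at inference, map back.
predicted_token = label_to_token["sport"]
print(predicted_token, "->", token_to_label[predicted_token])
```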
{
"text": "To fine-tune our models, we used PyTorch Lightning with a batch-size of 16, a constant learning rate of 0.0003, and the Adam optimizer. We report F 1 scores averaged over 3 runs with different random seeds. Machine Translation: We fine-tuned and evaluated all models on machine translation datasets in the news domain, focusing on 7 African languages. We used publicly available parallel data for the following languages: Hausa (6k sentences), 2 Igbo (10k sentences) (Ezeani et al., 2020) , Yor\u00f9b\u00e1 (10k sentences) (Adelani et al., 2021) , Swahili (30k sentences), 3 Luganda (7k sentences), Luo (7k sentences) and Pcm (8k sentences) (Adelani et al., 2022) . The datasets contain train, dev, and test folds for the individual languages. All machine translation corpora are publicly available. 4 To fine-tune our models for machine translation, we trained for 10 epochs using a beam size of 10 and a constant learning rate of 0.0003. As is standard, BLEU score (Papineni et al., 2002) was the evaluation metric.",
"cite_spans": [
{
"start": 467,
"end": 488,
"text": "(Ezeani et al., 2020)",
"ref_id": null
},
{
"start": 514,
"end": 536,
"text": "(Adelani et al., 2021)",
"ref_id": null
},
{
"start": 632,
"end": 654,
"text": "(Adelani et al., 2022)",
"ref_id": null
},
{
"start": 791,
"end": 792,
"text": "4",
"ref_id": null
},
{
"start": 958,
"end": 981,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "3.2"
},
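The snippet below sketches the decoding and scoring step described above: beam-search generation with a beam size of 10, scored with corpus BLEU via sacrebleu. The checkpoint path and sentence pair are placeholders only; running it requires an actual fine-tuned seq2seq checkpoint.

```python
import sacrebleu
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "path/to/finetuned-afriteva"  # placeholder for a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sources = ["Ina kwana?"]          # illustrative Hausa source sentence
references = [["Good morning."]]  # one reference stream, aligned with sources

batch = tokenizer(sources, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, num_beams=10, max_length=128)
hypotheses = tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```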
{
"text": "Here we compare AfriTeVa with existing multilingual language models that were pretrained on low-resource African languages. Table 3 shows a high-level breakdown of model features.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Models Comparisons",
"sec_num": "3.3"
},
{
"text": "mT5 (Xue et al., 2021b ) is a multilingual variant of T5 (Raffel et al., 2020) that was pretrained on 107 languages, but includes only 13 African languages, making up less than 6% of the training corpus.",
"cite_spans": [
{
"start": 4,
"end": 22,
"text": "(Xue et al., 2021b",
"ref_id": null
},
{
"start": 57,
"end": 78,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Comparisons",
"sec_num": "3.3"
},
{
"text": "byT5 (Xue et al., 2021a ) is a transformer pretrained on byte sequences using the same corpora as mT5; its model size is similar to mT5 and T5.",
"cite_spans": [
{
"start": 5,
"end": 23,
"text": "(Xue et al., 2021a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Comparisons",
"sec_num": "3.3"
},
{
"text": "AfriMT5 and AfriByT5 (Adelani et al., 2022) are multilingual sequence-to-sequence models that were adapted from mT5 and byT5, respectively. These models were further pretrained on 18 African languages plus English and French, starting from existing mT5 and byT5 checkpoints. Conneau et al., 2020) is an encoder-only model based on RoBERTa (Zhuang et al., 2021) . It was pretrained on a corpus consisting of 100 languages, of which only 8 were African languages.",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "(Adelani et al., 2022)",
"ref_id": null
},
{
"start": 275,
"end": 296,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 339,
"end": 360,
"text": "(Zhuang et al., 2021)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Comparisons",
"sec_num": "3.3"
},
{
"text": "AfriBERTa (Ogueji et al., 2021) is also an encoderonly model based on RoBERTa, pretrained from scratch with \"small data\", as already discussed.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "(Ogueji et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R (",
"sec_num": null
},
{
"text": "M2M-100 (Fan et al., 2021 ) is a multilingual encoder-decoder model that was pretrained for many-to-many multilingual translation using parallel data in 100 languages. M2M-100 can translate directly between any pair of the 100 languages covered in training, including 18 African languages.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Fan et al., 2021",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R (",
"sec_num": null
},
{
"text": "mBART50 (Tang et al., 2020 ) is a multilingual encoder-decoder model trained for machine translation in 50 languages. The model was fine-tuned on many translation directions at the same time, and covers 3 African languages in pretraining.",
"cite_spans": [
{
"start": 8,
"end": 26,
"text": "(Tang et al., 2020",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R (",
"sec_num": null
},
{
"text": "We present our machine translation results in Table 4 and Table 5 . We compared the results of different sequence-to-sequence models fine-tuned for two directions, to and from English, for each language in our dataset. Evaluation was performed on both the model variants pretrained only with the AfriBERTa corpus as well as the variant that includes English in the pretraining corpus. For comparison, machine Translation results for mT5, byT5, AfriMT5, AfriByT5, mBART50, and M2M-100 were copied from Adelani et al. (2022) .",
"cite_spans": [
{
"start": 501,
"end": 522,
"text": "Adelani et al. (2022)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 46,
"end": 65,
"text": "Table 4 and Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.1"
},
{
"text": "African Languages Covered XLM-R (Conneau et al., 2020) 270M Encoder-only Afaan Oromoo, Afrikaans, Amharic, Hausa, Malagasy, Somali, Swahili, Xhosa AfriBERTa (Ogueji et al., 2021) 112M",
"cite_spans": [
{
"start": 32,
"end": 54,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 157,
"end": 178,
"text": "(Ogueji et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "Encoder-only Afaan Oromoo, Amharic, Gahuza, Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya, Yor\u00f9b\u00e1 mT5 (Xue et al., 2021b) 582M Encoder-Decoder Afrikaans, Amharic, Chichewa, Hausa, Igbo, Malagasy, Somali, Shona, Sotho, Swahili, Xhosa, Yor\u00f9b\u00e1, Zulu byT5 (Xue et al., 2021a) 582M Encoder-Decoder Afrikaans, Amharic, Chichewa, Hausa, Igbo, Malagasy, Somali, Shona, Sotho, Swahili, Xhosa, Yor\u00f9b\u00e1, Zulu AfriMT5, 582M Encoder-Decoder Afrikaans, Amharic, Arabic, Chichewa, Hausa, Igbo, AfriByT5 (Adelani et al., 2022) Malagasy, Oromo, Nigerian Pidgin, Rwanda-Rundi, Sesotho, Shona, Somali, Swahili, Xhosa, Yor\u00f9b\u00e1, Zulu mBART50 (Tang et al., 2020) 610M Encoder-Decoder Afrikaans, Swahili, Xhosa M2M-100 (Fan et al., Focusing on variants of AfriTeVa, we find improved BLEU scores on all languages as we scale up our models. In both translation directions for most languages, we obtain our best BLEU scores using AfriTeVa base + En. Only when translating English into Nigerian Pidgin do we see a drop in BLEU score for AfriTeVa base + En. In Table 5 , scores improved by an average of 3 points as we go from small to large when translating from English to the various African languages. When translating to English, we observed average improvements of 4 points. With AfriTeVa large, scores improved by an extra BLEU point over AfriTeVa base.",
"cite_spans": [
{
"start": 112,
"end": 131,
"text": "(Xue et al., 2021b)",
"ref_id": null
},
{
"start": 262,
"end": 281,
"text": "(Xue et al., 2021a)",
"ref_id": null
},
{
"start": 497,
"end": 519,
"text": "(Adelani et al., 2022)",
"ref_id": null
},
{
"start": 629,
"end": 648,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 704,
"end": 716,
"text": "(Fan et al.,",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1041,
"end": 1048,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "What do these empirical results say with respect to our research question? The most pertinent comparison is between mT5 and AfriTeVa base + En: the former is pretrained on 100+ languages while the latter is only pretrained on the much smaller AfriBERTa corpus. The fact that AfriTeVa base + En outperforms mT5 (with a smaller model, no less) suggests the viability of the \"small data\" pretraining approach, so in this respect, these experimental results affirm our hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "The situation, however, is a bit more complex. AfriMT5, which starts with the mT5 backbone and performs further pretraining, outperforms AfriTeVa base + En. The AfriMT5 pretraining corpus comprises 12GB data in 20 languages, including English and French. This suggests that massive multilanguage pretraining remains useful as model initialization, which in turn would suggest that \"small data\" pretraining still cannot compete. However, this is not a fair comparison for at least two reasons: (1) AfriMT5 is a larger model, and (2) the pretraining corpus of AfriMT5 is much larger than the 1GB AfriBERTa corpus. Thus, a fair comparison would be pretraining with the AfriMT5 corpus from scratch with the same model size as mT5. We leave this for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "The effectiveness of byT5 and AfriByT5 further complicates our analysis. We see that byT5 alone achieves excellent BLEU scores. AfriByT5, which benefits from additional pretraining starting from a byT5 backbone, is only marginally better. In particular, byT5 appears to generate high-quality output for Luganda and Luo, two languages that it had never encountered before during pretraining. These results suggest that tokenization is consequential in ways we do not yet fully understand. Once again, this is interesting future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "We provide evaluation results for M2M-100 and mBART50 only as a reference, since we do not feel that they represent fair comparisons. All models discussed above derive from the T5 family, and thus it is easier to isolate the source of the translation quality differences. For comparisons to M2M-100 and mBART50, it is difficult to perform attribution analysis to understand the underlying factors contributing to effectiveness. Furthermore, both of these models are specialized for machine translation, whereas the T5-based models can be adapted to multiple downstream tasks. Table 4 : Machine Translation Results (lang-en) : BLEU scores when translating from each African language to English. All models were fine-tuned on each language using data in the news domain. Checkmarks indicate that the model was pretrained on that language. AfriMT5 and AfriByT5 were further pretrained using the mT5 base and byT5 base checkpoints, respectively (Adelani et al., 2022 Table 5 : Machine Translation Results (en-lang) : BLEU scores when translating from English to each African language. All models were fine-tuned on each language using data in the news domain. Checkmarks indicate that the model was pretrained on that language. AfriMT5 and AfriByT5 were pretrained further using the mT5 base and byT5 base checkpoints, respectively (Adelani et al., 2022) . The highest reported BLEU scores are shown in bold for T5 models; overall best BLEU scores are underlined. Table 6 : Text Classification Results: F 1 scores averaged over 3 random seeds. mBERT, XLM-R, and AfriBERTa results were obtained from Ogueji et al. (2021) .",
"cite_spans": [
{
"start": 941,
"end": 962,
"text": "(Adelani et al., 2022",
"ref_id": null
},
{
"start": 1328,
"end": 1350,
"text": "(Adelani et al., 2022)",
"ref_id": null
},
{
"start": 1595,
"end": 1615,
"text": "Ogueji et al. (2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 576,
"end": 583,
"text": "Table 4",
"ref_id": null
},
{
"start": 963,
"end": 970,
"text": "Table 5",
"ref_id": null
},
{
"start": 1460,
"end": 1467,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "# Params Model Family",
"sec_num": null
},
{
"text": "Text classification F 1 results are presented in Table 6, based on the experimental settings described in Section 3.2. Note that while it is possible to adapt sequence-to-sequence models for classification tasks, as we have done, intuitively, encoderonly models are more suitable for text classification tasks. AfriTeVa small outperforms mBERT and XLM-R on both languages despite having significantly fewer parameters. However, AfriTeVa base is still outperformed by AfriBERTa large by an average of 3 F 1 points on Yor\u00f9b\u00e1 and 2 F 1 points on Hausa. Our models also perform better than mT5 on both languages. As with machine translation, we see improvements as we scale our model from 64M parameters to 745M parameters. However, the gains are modest here. What do these text classification results say with respect to our research question? Once again, the pertinent comparison is between mT5 and Afri-TeVa, since we are primarily concerned with the viability of \"small data\" pretraining. Here, our results are consistent with the machine translation experiments: it does appear that we can pretrain full encoder-decoder models from scratch using relatively small amounts of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "4.2"
},
{
"text": "Encoder-decoder models are best suited for natural language generation tasks such as summarization, question answering, machine translation, etc. Cross-lingual datasets are often used as benchmarks to evaluate multilingual pretrained models. Despite our efforts to evaluate on as many tasks as possible, many existing datasets feature few to no African languages. For example, popular crosslingual datasets such as WikiLingua (Ladhak et al., 2020) , XQuAD (Artetxe et al., 2020) , and Tydi QA (Clark et al., 2020) only contain Swahili.",
"cite_spans": [
{
"start": 426,
"end": 447,
"text": "(Ladhak et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 456,
"end": 478,
"text": "(Artetxe et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 493,
"end": 513,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "Existing machine translation systems in many low-resource languages require much larger parallel corpora to improve translation quality. Exam-ples include languages such as Yor\u00f9b\u00e1, Igbo, and Luganda. To improve such systems, there is a need for high-quality data in multiple domains. While there are existing efforts to curate parallel datasets such as JW300 (Agi\u0107 and Vuli\u0107, 2019) , Yor\u00f9b\u00e1 (Adelani et al., 2021) , Igbo (Ezeani et al., 2020) , Fon (Emezue and Dossou, 2020), parallel corpora for bi-directional translation in Amharic, Tigrigna, Afan-Oromo, Wolaytta, and Ge'ez (Teferra Abate et al., 2018) , there is a need for continued research to creating high-quality datasets to further drive advances in low-resource machine translation (Fan et al., 2021) .",
"cite_spans": [
{
"start": 359,
"end": 381,
"text": "(Agi\u0107 and Vuli\u0107, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 391,
"end": 413,
"text": "(Adelani et al., 2021)",
"ref_id": null
},
{
"start": 421,
"end": 442,
"text": "(Ezeani et al., 2020)",
"ref_id": null
},
{
"start": 578,
"end": 606,
"text": "(Teferra Abate et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 744,
"end": 762,
"text": "(Fan et al., 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "In this work, we present AfriTeVa, a family of multilingual T5 models that were pretrained from scratch on 10 low-resource African languages with only around 1GB of data (with an additional variant model that includes English data in pretraining). Answering our research question, we have verified that it is possible to pretrain encoder-decoder models on relatively small amounts of data, but there remain conflating factors we have yet to fully understand. Although we do not reach the state of the art, our models achieve competitive results on text classification and machine translation benchmarks. We also highlight some of the limitations of evaluating sequence-to-sequence models for African languages. Finally, we release code and pretrained models to drive further work in multilingual models for African languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://www.statmt.org/wmt21/translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://opus.nlpl.eu/GlobalVoices.php 4 https://github.com/masakhane-io/lafand-mt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and an AI for Social Good grant from the Waterloo AI Institute. Computational resources were provided by Compute Ontario and Compute Canada. We also thank the Google TRC program for providing us free cloud TPU access.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards afrocentric NLP for African languages: Where we are and where we can go",
"authors": [],
"year": null,
"venue": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "3814--3841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ife Adebara and Muhammad Abdul-Mageed. 2022. To- wards afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 3814-3841, Dublin, Ireland. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ayodele Esther Awokoya, and Cristina Espa\u00f1a-Bonet. 2021. The effect of domain and diacritics in Yoruba-English neural machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Adelani",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Ruiter",
"suffix": ""
},
{
"first": "Jesujoba",
"middle": [],
"last": "Alabi",
"suffix": ""
},
{
"first": "Damilola",
"middle": [],
"last": "Adebonojo",
"suffix": ""
},
{
"first": "Adesina",
"middle": [],
"last": "Ayeni",
"suffix": ""
},
{
"first": "Mofe",
"middle": [],
"last": "Adeyemi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of Machine Translation Summit XVIII: Research Track",
"volume": "",
"issue": "",
"pages": "61--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Adelani, Dana Ruiter, Jesujoba Alabi, Damilola Adebonojo, Adesina Ayeni, Mofe Adeyemi, Ayo- dele Esther Awokoya, and Cristina Espa\u00f1a-Bonet. 2021. The effect of domain and diacritics in Yoruba- English neural machine translation. In Proceed- ings of Machine Translation Summit XVIII: Research Track, pages 61-75, Virtual. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Blessing Sibanda, Andiswa Bukula, and Sam Manthalu. 2022. A few thousand translations go a long way! Leveraging pre-trained models for African news translation",
"authors": [
{
"first": "Jesujoba",
"middle": [],
"last": "David Ifeoluwa Adelani",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Oluwadara Alabi",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Machel",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Ruiter",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Klakow",
"suffix": ""
},
{
"first": "Ernie",
"middle": [],
"last": "Nabende",
"suffix": ""
},
{
"first": "Tajuddeen",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Freshia",
"middle": [],
"last": "Gwadabe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sackey",
"suffix": ""
},
{
"first": "F",
"middle": [
"P"
],
"last": "Bonaventure",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"Chinenye"
],
"last": "Dossou",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Emezue",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Beukman",
"suffix": ""
},
{
"first": "Guyo",
"middle": [],
"last": "Shamsuddeen Hassan Muhammad",
"suffix": ""
},
{
"first": "Oreen",
"middle": [],
"last": "Dub Jarso",
"suffix": ""
},
{
"first": "Andre",
"middle": [
"Niyongabo"
],
"last": "Yousuf",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Rubungo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hacheme",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2022 Annual Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ifeoluwa Adelani, Jesujoba Oluwadara Al- abi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Chinenye Emezue, Colin Leong, Michael Beukman, Shamsud- deen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Ben- jamin Ayoade Ajibade, Tunde Oluwaseyi Ajayi, Yvonne Wambui Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Koffi Kalipe, Derguene Mbaye, Al- lahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, and Sam Man- thalu. 2022. A few thousand translations go a long way! Leveraging pre-trained models for African news translation. In Proceedings of the 2022 An- nual Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "JW300: A widecoverage parallel corpus for low-resource languages",
"authors": [
{
"first": "Zeljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 3204- 3210, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the cross-lingual transferability of monolingual representations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "4623--4637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Language model prior for low-resource neural machine translation",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7622--7634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Barry Haddow, and Alexandra Birch. 2020. Language model prior for low-resource neural machine translation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 7622-7634, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Nikolaev",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "454--470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typo- logically diverse languages. Transactions of the As- sociation for Computational Linguistics, 8:454-470.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Benchmarking neural and statistical machine translation on low-resource African languages",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2667--2675",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh, Paul McNamee, Matt Post, and Brian Thompson. 2020. Benchmarking neural and statis- tical machine translation on low-resource African languages. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2667- 2675, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MMTAfrica: Multilingual machine translation for African languages",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Chinenye Emezue",
"suffix": ""
},
{
"first": "F",
"middle": [
"P"
],
"last": "Bonaventure",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dossou",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "398--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Chinenye Emezue and Bonaventure F. P. Dossou. 2021. MMTAfrica: Multilingual machine translation for African languages. In Proceedings of the Sixth Conference on Machine Translation, pages 398-411, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "FFR v1.1: Fon-French neural machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Chinenye Emezue",
"suffix": ""
},
{
"first": "Femi Pancrace Bonaventure",
"middle": [],
"last": "Dossou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "83--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Chinenye Emezue and Femi Pancrace Bonaven- ture Dossou. 2020. FFR v1.1: Fon-French neural ma- chine translation. In Proceedings of the The Fourth Widening Natural Language Processing Workshop, pages 83-87, Seattle, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Chinedu Uchechukwu, and Mark Hepple. 2020. Igbo-English machine translation: An evaluation benchmark",
"authors": [
{
"first": "Ignatius",
"middle": [],
"last": "Ezeani",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Rayson",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ikechukwu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Onyenwe",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.00648"
]
},
"num": null,
"urls": [],
"raw_text": "Ignatius Ezeani, Paul Rayson, Ikechukwu E. Onyenwe, Chinedu Uchechukwu, and Mark Hepple. 2020. Igbo-English machine translation: An evaluation benchmark. arXiv:2004.00648.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Beyond English-Centric Multilingual Machine Translation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Bhosale",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Kishky",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Mandeep",
"middle": [],
"last": "Baines",
"suffix": ""
},
{
"first": "Onur",
"middle": [],
"last": "Celebi",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Liptchinsky",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Machine Learning Research",
"volume": "22",
"issue": "107",
"pages": "1--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, and Ar- mand Joulin. 2021. Beyond English-Centric Mul- tilingual Machine Translation. Journal of Machine Learning Research, 22(107):1-48.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transfer learning and distant supervision for multilingual transformer models: A study on African languages",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Hedderich",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Adelani",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jesujoba",
"middle": [],
"last": "Alabi",
"suffix": ""
},
{
"first": "Udia",
"middle": [],
"last": "Markus",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2580--2591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A. Hedderich, David Adelani, Dawei Zhu, Je- sujoba Alabi, Udia Markus, and Dietrich Klakow. 2020. Transfer learning and distant supervision for multilingual transformer models: A study on African languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2580-2591, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cross-lingual ability of multilingual BERT: An empirical study",
"authors": [
{
"first": "K",
"middle": [],
"last": "Karthikeyan",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual BERT: An empirical study. In International Conference on Learning Representations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization",
"authors": [
{
"first": "Faisal",
"middle": [],
"last": "Ladhak",
"suffix": ""
},
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4034--4048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal Ladhak, Esin Durmus, Claire Cardie, and Kath- leen McKeown. 2020. WikiLingua: A new bench- mark dataset for cross-lingual abstractive summariza- tion. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 4034-4048, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Continual mixed-language pre-training for extremely low-resource neural machine translation",
"authors": [
{
"first": "Zihan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Genta Indra Winata",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "2706--2718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihan Liu, Genta Indra Winata, and Pascale Fung. 2021. Continual mixed-language pre-training for extremely low-resource neural machine translation. In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021, pages 2706-2718, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A focus on neural machine translation for African languages",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Martinus",
"suffix": ""
},
{
"first": "Jade",
"middle": [
"Z"
],
"last": "Abbott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.05685"
]
},
"num": null,
"urls": [],
"raw_text": "Laura Martinus and Jade Z. Abbott. 2019. A focus on neural machine translation for African languages. arXiv:1906.05685.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the importance of pre-training data volume for compact language models",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Micheli",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Martin D'hoffschmidt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fleuret",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7853--7858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Micheli, Martin d'Hoffschmidt, and Fran\u00e7ois Fleuret. 2020. On the importance of pre-training data volume for compact language models. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7853-7858, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "AraT5: Text-totext transformers for Arabic language generation",
"authors": [],
"year": null,
"venue": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "628--647",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. 2022. AraT5: Text-to- text transformers for Arabic language generation. In Proceedings of the 60th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 628-647, Dublin, Ireland. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp\u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages",
"authors": [
{
"first": "Wilhelmina",
"middle": [],
"last": "Nekoto",
"suffix": ""
},
{
"first": "Vukosi",
"middle": [],
"last": "Marivate",
"suffix": ""
},
{
"first": "Tshinondiwa",
"middle": [],
"last": "Matsila",
"suffix": ""
},
{
"first": "Timi",
"middle": [],
"last": "Fasubaa",
"suffix": ""
},
{
"first": "Taiwo",
"middle": [],
"last": "Fagbohungbe",
"suffix": ""
},
{
"first": "Shamsuddeen",
"middle": [],
"last": "Solomon Oluwole Akinola",
"suffix": ""
},
{
"first": "Salomon",
"middle": [
"Kabongo"
],
"last": "Muhammad",
"suffix": ""
},
{
"first": "Salomey",
"middle": [],
"last": "Kabenamualu",
"suffix": ""
},
{
"first": "Freshia",
"middle": [],
"last": "Osei",
"suffix": ""
},
{
"first": "Rubungo",
"middle": [
"Andre"
],
"last": "Sackey",
"suffix": ""
},
{
"first": "Ricky",
"middle": [],
"last": "Niyongabo",
"suffix": ""
},
{
"first": "Perez",
"middle": [],
"last": "Macharm",
"suffix": ""
},
{
"first": "Orevaoghene",
"middle": [],
"last": "Ogayo",
"suffix": ""
},
{
"first": "Musie",
"middle": [],
"last": "Ahia",
"suffix": ""
},
{
"first": "Mofetoluwa",
"middle": [],
"last": "Meressa Berhe",
"suffix": ""
},
{
"first": "Masabata",
"middle": [],
"last": "Adeyemi",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Mokgesi-Selinga",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Okegbemi",
"suffix": ""
},
{
"first": "Kolawole",
"middle": [],
"last": "Martinus",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Tajudeen",
"suffix": ""
},
{
"first": "Kelechi",
"middle": [],
"last": "Degila",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Ogueji",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Siminyu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Jamiil Toure",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Iroro",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Ignatius",
"middle": [],
"last": "Orife",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ezeani",
"suffix": ""
},
{
"first": "Abdulkadir",
"middle": [],
"last": "Idris",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Dangana",
"suffix": ""
},
{
"first": "Hady",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Goodness",
"middle": [],
"last": "Elsahar",
"suffix": ""
},
{
"first": "Ghollah",
"middle": [],
"last": "Duru",
"suffix": ""
},
{
"first": "Murhabazi",
"middle": [],
"last": "Kioko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Espoir",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Elan Van Biljon",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Whitenack",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Onyefuluchi",
"suffix": ""
}
],
"year": null,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2144--2160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muham- mad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dan- gana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp\u00d6ktem, Adewale Akin- faderin, and Abdallah Bashir. 2020. Participatory re- search for low-resourced machine translation: A case study in African languages. In Findings of the Asso- ciation for Computational Linguistics: EMNLP 2020, pages 2144-2160, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Small data? No problem! Exploring the viability of pretrained multilingual language models for lowresourced languages",
"authors": [
{
"first": "Kelechi",
"middle": [],
"last": "Ogueji",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 1st Workshop on Multilingual Representation Learning",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021. Small data? No problem! Exploring the viability of pretrained multilingual language models for low- resourced languages. In Proceedings of the 1st Work- shop on Multilingual Representation Learning, pages 116-126, Punta Cana, Dominican Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An exploration of vocabulary size and transfer effects in multilingual language models for African languages",
"authors": [
{
"first": "Akintunde",
"middle": [],
"last": "Oladipo",
"suffix": ""
},
{
"first": "Odunayo",
"middle": [],
"last": "Ogundepo",
"suffix": ""
},
{
"first": "Kelechi",
"middle": [],
"last": "Ogueji",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the 3rd Workshop on African Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akintunde Oladipo, Odunayo Ogundepo, Kelechi Ogueji, and Jimmy Lin. 2022. An exploration of vocabulary size and transfer effects in multilingual language models for African languages. In Proceed- ings of the 3rd Workshop on African Natural Lan- guage Processing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Compu- tational Linguistics, ACL '02, page 311-318, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Adafactor: Adaptive learning rates with sublinear memory cost",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "4596--4604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, pages 4596-4604.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multilingual translation with extensible multilingual pretraining and finetuning",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Chau",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.00401"
]
},
"num": null,
"urls": [],
"raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv:2008.00401.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Parallel corpora for bi-directional statistical machine translation for seven Ethiopian language pairs",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Solomon Teferra Abate",
"suffix": ""
},
{
"first": "Martha",
"middle": [
"Yifiru"
],
"last": "Melese",
"suffix": ""
},
{
"first": "Million",
"middle": [],
"last": "Tachbelie",
"suffix": ""
},
{
"first": "Solomon",
"middle": [],
"last": "Meshesha",
"suffix": ""
},
{
"first": "Wondwossen",
"middle": [],
"last": "Atinafu",
"suffix": ""
},
{
"first": "Yaregal",
"middle": [],
"last": "Mulugeta",
"suffix": ""
},
{
"first": "Hafte",
"middle": [],
"last": "Assabie",
"suffix": ""
},
{
"first": "Binyam",
"middle": [],
"last": "Abera",
"suffix": ""
},
{
"first": "Tewodros",
"middle": [],
"last": "Ephrem",
"suffix": ""
},
{
"first": "Wondimagegnhue",
"middle": [],
"last": "Abebe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tsegaye",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "83--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solomon Teferra Abate, Michael Melese, Martha Yi- firu Tachbelie, Million Meshesha, Solomon Ati- nafu, Wondwossen Mulugeta, Yaregal Assabie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondim- agegnhue Tsegaye, Amanuel Lemma, Tsegaye An- dargie, and Seifedin Shifaw. 2018. Parallel corpora for bi-directional statistical machine translation for seven Ethiopian language pairs. In Proceedings of the First Workshop on Linguistic Resources for Nat- ural Language Processing, pages 83-90, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Lhoest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transform- ers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "and Colin Raffel. 2021a. ByT5: Towards a tokenfree future with pre-trained byte-to-byte models",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.13626"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Aditya Barua, Noah Constant, Rami Al- Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2021a. ByT5: Towards a token- free future with pre-trained byte-to-byte models. arXiv:2105.13626.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Aditya Barua, and Colin Raffel. 2021b. mT5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "483--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021b. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A robustly optimized BERT pre-training approach with post-training",
"authors": [
{
"first": "Liu",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Ya",
"suffix": ""
},
{
"first": "Zhao",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 20th Chinese National Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1218--1227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218-1227, Huhhot, China. Chinese Informa- tion Processing Society of China.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Model Configurations: model configurations and training hyperparameters."
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Dataset Information: Characteristics and the size of data in each language, including number of sentences and tokens, and uncompressed size on disk. The table also shows the written scripts and family that each language belongs to, along with its language code."
},
"TABREF5": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Model Comparisons: a high-level comparison of our model with similar large multilingual pretrained language models featuring low-resource African languages."
},
"TABREF7": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\"># params hau</td><td colspan=\"4\">translation from English ibo pcm swa yor</td><td>lug</td><td>luo</td><td>avg</td></tr><tr><td>mT5 (Xue et al., 2021b)</td><td>582M</td><td>2.4</td><td colspan=\"4\">14.1 33.5 23.2 2.2</td><td>3.5</td><td>3.2</td><td>11.7</td></tr><tr><td/><td/><td>\u2713</td><td>\u2713</td><td>\u2717</td><td>\u2713</td><td>\u2713</td><td>\u2717</td><td>\u2717</td></tr><tr><td>ByT5 (Xue et al., 2021a)</td><td>582M</td><td>8.8</td><td colspan=\"5\">18.6 32.4 26.6 6.2 11.3</td><td>8.8</td><td>16.1</td></tr><tr><td/><td/><td>\u2713</td><td>\u2713</td><td>\u2717</td><td>\u2713</td><td>\u2713</td><td>\u2717</td><td>\u2717</td></tr><tr><td>AfriMT5 (Adelani et al., 2022) AfriByT5 (Adelani et al., 2022)</td><td>582M 582M</td><td>4.5 9.8 \u2713</td><td colspan=\"5\">15.4 34.5 26.7 4.7 19.3 32.5 27.5 7.1 12.2 5.9 \u2713 \u2717 \u2713 \u2713 \u2717</td><td>4.5 9.0 \u2717</td><td>13.7 16.8</td></tr><tr><td>AfriTeVa Small AfriTeVa Base AfriTeVa Large AfriTeVa Base + En</td><td colspan=\"6\">64M 229M 745M 229M 10.1 17.3 28.7 24.3 6.8 4.3 8.1 30.3 16.1 2.9 7.2 13.2 31.7 20.3 4.9 8.9 15.7 31.5 20.6 6.0 \u2713 \u2713 \u2713 \u2713 \u2713</td><td>2.6 5.3 6.2 8.7 \u2717</td><td>4.1 6.6 6.8 8.6 \u2717</td><td>9.8 12.7 13.7 14.9</td></tr><tr><td>M2M-100 (Fan et al., 2021)</td><td colspan=\"7\">418M 14.4 20.3 33.2 27.0 9.6 13.0 10.8 18.3</td></tr><tr><td/><td/><td>\u2713</td><td>\u2713</td><td>\u2717</td><td>\u2713</td><td>\u2713</td><td>\u2713</td><td>\u2713</td></tr><tr><td>mBART50 (Tang et al., 2020)</td><td colspan=\"6\">610M 11.8 14.8 33.9 22.1 7.5</td><td>9.7</td><td>9.6</td><td>15.6</td></tr><tr><td/><td/><td>\u2717</td><td>\u2717</td><td>\u2717</td><td>\u2717</td><td>\u2717</td><td>\u2717</td><td>\u2717</td></tr></table>",
"text": "). The highest reported BLEU scores are shown in bold for T5 models; overall best BLEU scores are underlined."
}
}
}
}