{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:26.279975Z"
},
"title": "It's Basically the Same Language Anyway: the Case for a Nordic Language Model",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Carlsson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Love",
"middle": [],
"last": "B\u00f6rjeson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "K",
"middle": [
"B"
],
"last": "Sweden",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, include environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as NLP community.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, include environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as NLP community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep Transformer language models have become the weapon of choice in modern NLP (and in AI more generally). There is a rich, and evergrowing, flora of models available, including BERT (Devlin et al., 2019) , XLNet (Yang et al., 2019) , Electra (Clark et al., 2020) , T5 (Raffel et al., 2020) , and GPT (2 and 3) (Radford et al., 2019; Brown et al., 2020) . These models present slight variations of architectural choices, training objectives, parameter settings, and size and composition of the training data. Despite some internal variation in performance, Transformer language models in general hold state of the art results in basically all NLP benchmarks and evaluation frameworks at the moment (Wang et al., 2018 Nie et al., 2020) .",
"cite_spans": [
{
"start": 184,
"end": 205,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 214,
"end": 233,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 244,
"end": 264,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 270,
"end": 291,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 312,
"end": 334,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 335,
"end": 354,
"text": "Brown et al., 2020)",
"ref_id": null
},
{
"start": 699,
"end": 717,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF26"
},
{
"start": 718,
"end": 735,
"text": "Nie et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The downside to this recent development is the computational cost of training deep Transformer models. Starting with BERT-Base with its (now viewed as modest, but at the time of publication seen as substantial) 110 million parameters, and BERT-Large with its 340 million parameters, there has been a virtual explosion in the number of parameters, culminating in the recent GPT-3 with its 175 billion parameters (Brown et al., 2020) , GShard with 600 billion parameters (Lepikhin et al., 2021) , and the most recent Switch Transformer with a whopping 1,3 trillion parameters (Fedus et al., 2021) . This development translates into an acute need for access to powerful processing platforms, huge amounts of training data, and an ability to harbor extremely long training times. Taken together, this is a perfect recipe for extreme energy consumption and cost, which risks leading to reduced inclusivity in research on largescale language models.",
"cite_spans": [
{
"start": 411,
"end": 431,
"text": "(Brown et al., 2020)",
"ref_id": null
},
{
"start": 469,
"end": 492,
"text": "(Lepikhin et al., 2021)",
"ref_id": null
},
{
"start": 574,
"end": 594,
"text": "(Fedus et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a budding debate on the environmental and cultural impact of training and using largescale language models. Two recent examples are Strubell et al. (2019) and Bender et al. (2021) ; the former analyze the energy consumption and cost of training deep Transformer language models, and the latter voice concerns regarding both the environmental and cultural impact of training and using large-scale language models. We hope to contribute to this discussion by providing a Nordic perspective on the need for large-scale language models. We will assume the position that a collaborative effort towards training a largescale Nordic language model is something worth striving for. We consider a number of arguments for this position, include environmental considerations, cost, data availability, language typology, cultural similarity, and transparency.",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "Strubell et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 168,
"end": 188,
"text": "Bender et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "to a flight between San Fransisco and New York, or the average emissions resulting from electricity and heating for one person for one year in Stockholm. 1 This is something of a best-case scenario; the authors also calculate that training a BERT-Large with neural architecture search emits something like 284 tonnes of CO 2 , which is roughly equivalent to the emissions of 56 average persons, throughout a year. An interesting question thus becomes: how much CO 2 emission has been produced as a result of the current development in NLP?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
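{
"text": "To make the comparison above concrete, here is a small back-of-the-envelope sketch in Python (an editorial illustration rather than part of the original paper; the variable names are ours, and the per-person figure is simply what the two quoted numbers imply):\n\nNAS_TRAINING_TONNES = 284  # reported emissions for BERT-Large with neural architecture search\nPERSONS_EQUIVALENT = 56    # number of average persons whose yearly emissions this corresponds to\n\n# implied average yearly emissions per person in this comparison\nprint(round(NAS_TRAINING_TONNES / PERSONS_EQUIVALENT, 1))  # ~5.1 tonnes of CO2 per person per year",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},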
{
"text": "It is of course impossible to get an accurate count on this, but one way to approximate an answer might be to consider how many models have been trained in the world so far. We obviously cannot know this either, but we might be able to get an idea by looking at the number of models published in open source libraries. Luckily, much of the recent development is centered around one such library: the Transformers library of the company Hugging Face. 2 The Transformers library contains (at the time of submission) more than 6,800 models covering a total of 250 languages. A survey carried out by Benaich and Hogarth in the fall of 2020 claims that more than 1,000 companies are using the Transformers library in production, and that it has been installed more than 5 million times. 3 6,800 models times a low estimate of 652 kg of CO 2 sums to 4,434 tonnes of CO 2 emissions. This is of course an extremely unreliable estimate. Many of the models uploaded to the Transformers library are merely finetuned and not trained from scratch (we have not been able to quantify this proportion). On the other hand, many of the uploaded models are significantly larger than BERT-Base, and one can assume that only a fraction of models that are built are actually uploaded to the Transformers model repository. By comparison, the average Swedish citizen emits around 8 tonnes of CO 2 per year, 4 while RISE (the Research Institutes of Sweden) with approximately 2,800 employees emitted a total of 1,287 tonnes CO 2 during 2019 according to the 2019 annual report.",
"cite_spans": [
{
"start": 450,
"end": 451,
"text": "2",
"ref_id": null
},
{
"start": 782,
"end": 783,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Counting only the Nordic models uploaded to Hugging Face, there are (at the time of submis-1 www.regionfakta.com 2 https://github.com/huggingface/ transformers 3 https://www.stateof.ai/ (slide 127) 4 www.naturvardsverket.se sion) a total of 354 models for the Nordic languages (see Table 1 ). Based on the assumptions in Strubell et al. (2019) , this amounts to more than 230 tonnes of CO 2 . By comparison, Anthony et al. (2020) estimates (using slightly different assumptions than Strubell et al. (2019) ) that training GPT-3 resulted in at least 85 tonnes of CO 2 emission. Although these estimates are not directly comparable, they indicate that a focused effort to produce a large-scale Nordic language model may lead to a smaller carbon footprint than the current development where we see a steady increase in the number of monolingual models.",
"cite_spans": [
{
"start": 321,
"end": 343,
"text": "Strubell et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 483,
"end": 505,
"text": "Strubell et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
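{
"text": "As a rough illustration of the arithmetic behind these emission estimates, the following Python sketch (an editorial addition, not part of the original study; the constant names are ours) reproduces the back-of-the-envelope numbers, assuming that every model costs one BERT-Base-sized training run at the 652 kg low estimate quoted above:\n\nCO2_PER_MODEL_TONNES = 0.652  # low per-model estimate (Strubell et al., 2019)\nHF_MODELS_TOTAL = 6800        # models on the Hugging Face hub at the time of submission\nHF_MODELS_NORDIC = 354        # Nordic-language models (see Table 1)\n\ndef emissions_tonnes(n_models, per_model=CO2_PER_MODEL_TONNES):\n    # naive estimate: each model is assumed to be trained once, from scratch\n    return n_models * per_model\n\nprint(round(emissions_tonnes(HF_MODELS_TOTAL)))   # ~4434 tonnes of CO2\nprint(round(emissions_tonnes(HF_MODELS_NORDIC)))  # ~231 tonnes of CO2\n\nAs noted above, this deliberately ignores the fact that many uploaded models are merely fine-tuned, while others are much larger than BERT-Base.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},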
{
"text": "It is anything but cheap to train large-scale language models. The cost for performing a single training pass for the largest T5 model is estimated to be $1, 3 million (Sharir et al., 2020) , while training GPT-3 is estimated at around $4, 6 million. 5 To put these numbers into perspective, the average project funding in the EU Horizon 2020 program is estimated to be around $2, 1 million, 6 while the average national research project is typically not more than around $150 thousand. 7 This means that, unless you happen to be in the possession of a sizeable computing infrastructure, training models on this scale will be out of the question for most researchers. However, even with access to suitable GPUs, it is not obvious that it will be possible to train a model on the required scale. Li (2020) estimates 5 lambdalabs.com/blog/demystifying-gpt-3/ 6 accelopment.com/blog/lessons-learntfrom-horizon-2020-for-its-final-2years/ 7 vr.se/soka-finansiering/beslut/2020-09-08-humaniora-och-samhallsvetenskap. html that performing a single training run with the full GPT-3 using an NVIDIA Tesla V100 GPU at its theoretical max speed would require 355 years. Assuming access to an NVIDIA DGX-1, which features 8 V100 GPUs, we would still need 44 years to build a replica of GPT-3. The cost of buying a DGX-1 machine is around $129 thousand -i.e. roughly the size of an average national research project.",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Sharir et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 251,
"end": 252,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 2: Cost",
"sec_num": "3"
},
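{
"text": "A minimal sketch of the training-time arithmetic referred to above (an editorial illustration; it simply assumes idealised linear scaling across the 8 V100 GPUs of a DGX-1, ignoring communication overhead):\n\nSINGLE_V100_YEARS = 355  # estimated years for one GPT-3 training run on a single V100 (Li, 2020)\nGPUS_PER_DGX1 = 8        # a DGX-1 features 8 V100 GPUs\n\n# idealised linear speed-up; real multi-GPU training scales less than linearly\nprint(round(SINGLE_V100_YEARS / GPUS_PER_DGX1))  # ~44 years",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 2: Cost",
"sec_num": "3"
},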
{
"text": "The sizeable cost (monetary as well as temporal) required to build a large-scale language model effectively excludes a large proportion of the NLP community from training models. This may not be entirely negative, considering the environmental concerns raised in the previous section, but it would be desirable if the production of large-scale language models was more inclusive and collaborative, with transparency and the possibility to influence the procedure even by smaller research groups. A communal effort would not only enable more researchers to have an influence on the model design, but it may also lead to broader usage of the resulting model, thereby reducing the need to constantly build new small (and probably not very useful) models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 2: Cost",
"sec_num": "3"
},
{
"text": "It is a known fact that bigger training data leads to improved performance when using statistical learning methods in NLP (Banko and Brill, 2001; Sahlgren and Lenci, 2016) . This has been eminently well demonstrated in the context of language models by the recent improvements using models that have been trained on very large data samples (Raffel et al., 2020; Brown et al., 2020) . It is a fascinating question whether there at all exists sufficiently large text data to build native models for all Nordic languages.",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Banko and Brill, 2001;",
"ref_id": "BIBREF3"
},
{
"start": 146,
"end": 171,
"text": "Sahlgren and Lenci, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 340,
"end": 361,
"text": "(Raffel et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 362,
"end": 381,
"text": "Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
{
"text": "Considering the biggest Nordic language Swedish as an example, Sweden has legal deposit laws installed in 1661 for everything printed. During the the twentieth century it was gradually extended to include sound, moving images and computer games and electronic material. The law for legal deposit of electronic material was added in 2012. As a result, the National Library of Sweden (KB), has vast and ever growing collections, closing in on 26 Petabyte of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
{
"text": "Though only a fraction of the collections are digitized, the digital collections are nonetheless substantial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
{
"text": "KB, through its data lab (KBLab), works continuously to assembly corpora of Swedish texts and to make them available for modeling. The latest corpus of cleaned, edited, raw Swedish text is just over 104 GB of size (corresponding to approximately 1,4 billion sentences and 18,2 billion words). The sources for this corpus are: Swedish Wikipedia 2 GB; Governmental texts 5 GB; Electronic publications 0,4 GB; Social media 5GB; Monographs 2GB, and; Newspapers 90 GB. The corpus currently under construction increases primarily the share of born digital text from legal electronic deposits and is expected to be around 1 TB of cleaned, edited, raw Swedish text (thus approximately 14 billion sentences and 182 billion words). The upper limit (in terms of size) for subsequent corpora is expected to be between 2\u22125 TB, depending on the possibilities to transcribe spoken Swedish present in the KB collections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
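{
"text": "The projected sentence and word counts above follow from a simple linear extrapolation of the current corpus statistics. The following Python sketch (an editorial illustration; the function and variable names are ours) makes this explicit, treating 1 TB as roughly ten times the current 104 GB corpus:\n\nSENTENCES_CURRENT = 1.4e9  # approx. sentences in the current 104 GB corpus\nWORDS_CURRENT = 18.2e9     # approx. words in the current 104 GB corpus\n\ndef extrapolate(scale_factor):\n    # linear extrapolation from the current corpus statistics\n    return SENTENCES_CURRENT * scale_factor, WORDS_CURRENT * scale_factor\n\nsentences, words = extrapolate(10)  # a 1 TB corpus, roughly ten times the current one\nprint(round(sentences / 1e9), round(words / 1e9))  # ~14 billion sentences, ~182 billion words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},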
{
"text": "The situations in the other Nordic countries are similar, relative to the size of the population in the respective countries. There are consequently extensive Danish and Norwegian collections available, whereas the text/data resources in Iceland and Faroe Islands are expected to be substantially smaller. Combining all Nordic text resources would likely lead to a fairly substantial data source, likely on the order of Terabytes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
{
"text": "The data conditions for the larger Nordic languages look promising even when considered individually, but it is not obvious that there even exists enough data to train native large-scale models in the smaller Nordic languages. Fortunately, it has been demonstrated that multilingual models improve the performance for languages with less available training data, due to transfer effects (Conneau et al., 2020) . In particular, the transfer effects seems to be specifically beneficial for typologically similar languages (Karthikeyan et al., 2020; Lauscher et al., 2020) . It is thus likely that in particular Icelandic and Faroese would benefit from a joint Nordic language model.",
"cite_spans": [
{
"start": 387,
"end": 409,
"text": "(Conneau et al., 2020)",
"ref_id": null
},
{
"start": 520,
"end": 546,
"text": "(Karthikeyan et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 547,
"end": 569,
"text": "Lauscher et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 3: Data Size and Transfer",
"sec_num": "4"
},
{
"text": "The Nordic languages belong to one of three Germanic language groups, also referred to as North Germanic languages (in addition to West and now extinct East Germanic). The North Germanic language group is further divided into two branches: East Scandinavian languages, which includes Swedish and Danish, and West Scandina-vian languages, which contains Norwegian, Icelandic and Faroese. This genealogical categorization is sometimes contrasted with a distinction based on mutual intelligibility, which separates Continental Scandinavian (Swedish, Norwegian and Danish) from Insular Scandinavian (Icelandic and Faroese).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "The Nordic languages are so similar from a typological perspective that the language boundaries have been, if not in dispute, at least subject to some discussion (Stampe Sletten et al., 2005) . The difference between dialects within the Nordic languages is in some cases probably larger than the difference between the languages. A telling example is the difference between Norwegian Bokm\u00e5l, which is very similar to Danish and as such is categorized as an East Scandinavian language, and Nynorsk, which is categorized as a West Scandinavian language. Another example is the difference between Jamtlandish (or Jamska, a dialect spoken in the Swedish region J\u00e4mtland, which is categorized as a West Scandinavian language) and standard Swedish (which is East Scandinavian).",
"cite_spans": [
{
"start": 162,
"end": 191,
"text": "(Stampe Sletten et al., 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "From a typological perspective, it thus makes sense to entertain the idea of a joint North Germanic language model, in particular when considering the potential for transfer effects to the smaller Nordic languages. Of course, one can always ask whether we should not aim for a combined Germanic model instead? There will probably be something like an order of magnitude more data available if we consider all Germanic languages rather than just the Nordic ones. However, one can expect diminishing returns by adding more data at some point, and it is an interesting (and, as far as we are aware, open) question what is the trade-off between language similarity and data size?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "6 Argument 5: Culture Bender et al. (2021) raise concerns about the considerable anglocentrism of current language models. We agree that this is potentially problematic; most current models are trained on data harvested from the Internet, which we know is produced by certain demographies, and as such is not representative of the general population. 8 A consequence of this is that current language models only encode the perspectives of certain groups of people, and these people tend to not belong to marginalized groups. It is well-known that language models encode biases and prejudice that may be problematic May et al., 2019) .",
"cite_spans": [
{
"start": 22,
"end": 42,
"text": "Bender et al. (2021)",
"ref_id": "BIBREF4"
},
{
"start": 615,
"end": 632,
"text": "May et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "Anglocentrism is not necessarily a disqualifying factor for the Nordic countries, some of which (such as Sweden) is sometimes considered to be among the most Americanized countries in the world (\u00c5sard, 2016; Alm, 2003) . We generally listen to the same type of music, watch the same type of movies, and watch the same type of TVshows. We don't, however, have similar political systems (as demonstrated by recent events). By contrast, there is arguably no (significant) difference in culture, politics, or economics between the Nordic countries. In fact, there are probably more cultural differences within the countries than between.",
"cite_spans": [
{
"start": 194,
"end": 207,
"text": "(\u00c5sard, 2016;",
"ref_id": null
},
{
"start": 208,
"end": 218,
"text": "Alm, 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "A relevant question is how to also include minority languages from other language families, such as S\u00e1mi. A natural suggestion for this specific case is to consider a Uralic language model, which would include languages such as Finnish, Hungarian, Estonian, as well as the smaller languages Erzya, Moksha, Mari, Udmurt, S\u00e1mi, and Komi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 4: Typology",
"sec_num": "5"
},
{
"text": "The largest concurrent language models are not publicly available. Few have probably missed the controversy surrounding the initial decision of Open AI to not release GPT-2 due to concerns of adversarial usages. 9 As we know, GPT-2 was eventually released in full, and there are now GPT-2 models available in many other languages. The original GPT-3 model is however not yet openly available (Open AI is beginning to look like a misnomer), but there are several open-source efforts to provide competing, or at least alternative, models. 10, 11 This lack of transparency obviously limits the ability for other researchers not only to investigate this type of model, but also to contribute to its future development. A collaborative Nordic effort would ensure inclusivity in the development, as well as accessibility to the final model.",
"cite_spans": [
{
"start": 537,
"end": 540,
"text": "10,",
"ref_id": null
},
{
"start": 541,
"end": 543,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument 6: Transparency",
"sec_num": "7"
},
{
"text": "Based on the considerations raised in this paper, we argue that we -the Nordic NLP community should work together to build a truly largescale Nordic language model, for the Nordic languages, by Nordic researchers. We believe that such a resource will be extremely beneficial for Nordic NLP, and that it will have the potential to reduce the environmental impact of continuously training new models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "https://www.pewresearch.org/internet/ fact-sheet/social-media/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "openai.com/blog/better-languagemodels/ 10 github.com/EleutherAI/gpt-neo 11 github.com/sberbank-ai/ru-gpts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "America and the future of sweden: Americanization as controlled modernization",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Alm",
"suffix": ""
}
],
"year": 2003,
"venue": "American Studies in Scandinavia",
"volume": "35",
"issue": "2",
"pages": "64--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Alm. 2003. America and the future of swe- den: Americanization as controlled modernization. American Studies in Scandinavia, 35(2):64-72.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Carbontracker: Tracking and predicting the carbon footprint of training deep learning models",
"authors": [
{
"first": "Lasse",
"middle": [
"F.",
"Wolff"
],
"last": "Anthony",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Kanding",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Selvan",
"suffix": ""
}
],
"year": 2020,
"venue": "ICML Workshop on Challenges in Deploying and monitoring Machine Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Track- ing and predicting the carbon footprint of training deep learning models. In ICML Workshop on Chal- lenges in Deploying and monitoring Machine Learn- ing Systems.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Det bl\u00e5gula stj\u00e4rnbaneret: USA:s n\u00e4rvaro och inflytande i Sverige. Carlssons",
"authors": [
{
"first": "E",
"middle": [],
"last": "\u00c5sard",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.\u00c5sard. 2016. Det bl\u00e5gula stj\u00e4rnbaneret: USA:s n\u00e4rvaro och inflytande i Sverige. Carlssons.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambigua- tion. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01, page 26-33, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the dangers of stochastic parrots: Can language models be too big?",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Begru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "FAccT '21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Bender, Timnit Begru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In FAccT '21.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identifying and reducing gender bias in word-level language models",
"authors": [
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "7--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Iden- tifying and reducing gender bias in word-level lan- guage models. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Student Re- search Workshop, pages 7-15, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language models are few-shot learners",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language models are few-shot learners.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Electra: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre- training text encoders as discriminators rather than generators. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguis- tics, pages 8440-8451, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity",
"authors": [
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Fedus, Barret Zoph, and Noam Shazeer. 2021. http://arxiv.org/abs/2101.03961 Switch trans- formers: Scaling to trillion parameter models with simple and efficient sparsity.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cross-lingual ability of multilingual bert: An empirical study",
"authors": [
{
"first": "K",
"middle": [],
"last": "Karthikeyan",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual bert: An empirical study. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Vinit",
"middle": [],
"last": "Ravishankar",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4483--4499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Vinit Ravishankar, Ivan Vuli\u0107, and Goran Glava\u0161. 2020. From zero to hero: On the limitations of zero-shot language transfer with mul- tilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4483-4499, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "2021. {GS}hard: Scaling giant models with conditional computation and automatic sharding",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Lepikhin",
"suffix": ""
},
{
"first": "Hyoukjoong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yuanzhong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Dehao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Yanping",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. {GS}hard: Scaling giant models with conditional computation and automatic sharding. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Openai's gpt-3 language model: A technical overview",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "2021--2023",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Li. 2020. Openai's gpt-3 language model: A technical overview. https://lambdalabs. com/blog/demystifying-gpt-3/. Ac- cessed: 2021-02-05.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4885--4901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 4885-4901, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Techni- cal report, Open AI.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The effects of data size and frequency range on distributional semantic models",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "975--980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren and Alessandro Lenci. 2016. The ef- fects of data size and frequency range on distribu- tional semantic models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 975-980, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The cost of training nlp models: A concise overview",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Sharir",
"suffix": ""
},
{
"first": "Barak",
"middle": [],
"last": "Peleg",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.08900"
]
},
"num": null,
"urls": [],
"raw_text": "Or Sharir, Barak Peleg, and Yoav Shoham. 2020. http://arxiv.org/abs/arXiv:2004.08900 The cost of training nlp models: A concise overview.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Nordens spr\u00e5k -med r\u00f6tter och f\u00f6tter",
"authors": [
{
"first": "Iben",
"middle": [],
"last": "Stampe Sletten",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Torp",
"suffix": ""
},
{
"first": "Kaisa",
"middle": [],
"last": "H\u00e4kkinen",
"suffix": ""
},
{
"first": "Mikael",
"middle": [],
"last": "Svonni",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"Christian"
],
"last": "Olsen",
"suffix": ""
}
],
"year": 2005,
"venue": "Number 2004:008 in Nord. Nordisk ministerr\u00e5d",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iben Stampe Sletten, Arne Torp, Kaisa H\u00e4kkinen, Mikael Svonni, and Carl Christian Olsen. 2005. Nordens spr\u00e5k -med r\u00f6tter och f\u00f6tter. Number 2004:008 in Nord. Nordisk ministerr\u00e5d.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3266--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural In- formation Processing Systems, pages 3266-3280. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {}
}
}