{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:13.515497Z"
},
"title": "\"A Passage to India\": Pre-trained Word Embeddings for Indian Languages",
"authors": [
{
"first": "Kumar",
"middle": [],
"last": "Saurav",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Bombay",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Kumar",
"middle": [],
"last": "Saunack",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Bombay",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Diptesh",
"middle": [],
"last": "Kanojia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Bombay",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Bombay",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dense word vectors or 'word embeddings' which encode semantic properties of words, have now become integral to NLP tasks like Machine Translation (MT), Question Answering (QA), Word Sense Disambiguation (WSD), and Information Retrieval (IR). In this paper, we use various existing approaches to create multiple word embeddings for 14 Indian languages. We place these embeddings for all these languages, viz.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Dense word vectors or 'word embeddings' which encode semantic properties of words, have now become integral to NLP tasks like Machine Translation (MT), Question Answering (QA), Word Sense Disambiguation (WSD), and Information Retrieval (IR). In this paper, we use various existing approaches to create multiple word embeddings for 14 Indian languages. We place these embeddings for all these languages, viz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "India has a total of 22 scheduled languages with a combined total of more than a billion speakers. Indian language content on the web is accessed by approximately 234 million speakers across the world 1 . Despite the enormous user base, Indian languages are known to be low-resource or resource-constrained languages for NLP. Word embeddings have proven to be important resources, as they provide a dense set of features for downstream NLP tasks like MT, QA, IR, WSD, etc. Unlike in classical Machine Learning wherein features have at times to be extracted in a supervised manner, embeddings can be obtained in a completely unsupervised fashion. For Indian languages, there are little corpora and few datasets of appreciable size available for computational tasks. The wikimedia dumps which are used for generating pre-trained models are insufficient. Without sufficient data, it becomes difficult to train embeddings. NLP tasks that benefit from these pre-trained embeddings are very diverse. Tasks ranging from word analogy and spelling correction to more complex ones like Question Answering (Bordes et al., 2014) , Machine Translation (Artetxe et al., 2019) , and Information Retrieval (Diaz et al., 2016) have reported improvements with the use of well-trained embeddings models. The recent trend of transformer architecture based neural networks has inspired various language models that help train contextualized embeddings (Devlin et al., 2018; Peters et al., 2018; Melamud et al., 2016; Lample and Conneau, 2019) . They report significant improvements over various NLP tasks and release pre-trained embeddings models for many languages. One of the shortcomings of the currently available pre-trained models is the corpora size used for their training. Almost all of these models use Wikimedia corpus to train models which is insufficient 1 Source Link for Indian languages as Wikipedia itself lacks significant number of articles or text in these languages. Although there is no cap or minimum number of documents/lines which define a usable size of a corpus for training such models, it is generally considered that the more input training data, the better the embedding models. Acquiring raw corpora to be used as input training data has been a perennial problem for NLP researchers who work with low resource languages. Given a raw corpus, monolingual word embeddings can be trained for a given language. Additionally, NLP tasks that rely on utilizing common linguistic properties of more than one language need cross-lingual word embeddings, i.e., embeddings for multiple languages projected into a common vector space. These cross-lingual word embeddings have shown to help the task of cross-lingual information extraction (Levy et al., 2017) , False Friends and Cognate detection (Merlo and Rodriguez, 2019) , and Unsupervised Neural Machine Translation (Artetxe et al., 2018b) . With the recent advent of contextualized embeddings, a significant increase has been observed in the types of word embedding models. It would be convenient if a single repository existed for all such embedding models, especially for low-resource languages. Our work creates such a repository for fourteen Indian languages, keeping this in mind, by training and deploying 436 models with different training algorithms (like word2vec, BERT, etc.) and hyperparameters as detailed further in the paper. Our key contributions are:",
"cite_spans": [
{
"start": 1095,
"end": 1116,
"text": "(Bordes et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 1139,
"end": 1161,
"text": "(Artetxe et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1190,
"end": 1209,
"text": "(Diaz et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 1431,
"end": 1452,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 1453,
"end": 1473,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 1474,
"end": 1495,
"text": "Melamud et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 1496,
"end": 1521,
"text": "Lample and Conneau, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 2737,
"end": 2756,
"text": "(Levy et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 2795,
"end": 2822,
"text": "(Merlo and Rodriguez, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 2869,
"end": 2892,
"text": "(Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) We acquire raw monolingual corpora for fourteen languages, including Wikimedia dumps. (2) We train various embedding models and evaluate them. 3We release these embedding models and evaluation data in a single repository 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The roadmap of the paper is as follows: in section 2, we discuss previous work; section 3 discusses the corpora and our evaluation datasets; section 4 briefs on the approaches used for training our models, section 5 discusses the resultant models and their evaluation; section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Word embeddings were first introduced in (Y. Bengio, 2003) when it was realised that learning the joint probability of sequences was not feasible due to the 'curse of dimensionality', i.e., at that time, the value added by an additional dimension seemed much smaller than the overhead it added in terms of computational time, and space. Since then, several developments have occurred in this field. Word2Vec (Mikolov et al., 2013a) showed the way to train word vectors. The models introduced by them established new stateof-the-art on tasks such as Word Sense Disambiguation (WSD). GloVE (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) further improved on results shown by Mikolov et al. (2013a) , where GloVE used a co-occurrence matrix and FastText utilized the sub-word information to generate word vectors. Sent2Vec (Pagliardini et al., 2017) generates sentence vectors inspired by the same idea. Universal Sentence Embeddings (Cer et al., 2018) , on the other hand, creates sentence vectors using two variants: transformers and DANs. Doc2Vec (Le and Mikolov, 2014) computes a feature vector for every document in the corpus. Similarly, Context2vec (Melamud et al., 2016) learns embedding for variable length sentential context for target words. The drawback of earlier models was that the representation for each word was fixed regardless of the context in which it appeared. To alleviate this problem, contextual word embedding models were created. ELMo (Peters et al., 2018) used bidirectional LSTMs to improve on the previous works. Later, BERT (Devlin et al., 2018) used the transformer architecture to establish new a state-of-the-art across different tasks. It was able to learn deep bidirectional context instead of just two unidirectional contexts, which helped it outperform previous models. XLNet (Yang et al., 2019 ) was a further improvement over BERT. It addressed the issues in BERT by introducing permutation language modelling, which allowed it to surpass BERT on several tasks. Cross-lingual word embeddings, in contrast with monolingual word embeddings, learn a common projection between two monolingual vector spaces. MUSE (Conneau et al., 2017) was introduced to get cross-lingual embeddings across different languages. VecMap (Artetxe et al., 2018a) introduced unsupervised learning for these embeddings. BERT, which is generally used for monolingual embeddings, can also be trained in a multilingual fashion. XLM (Lample and Conneau, 2019) was introduced as an improvement over BERT in the cross-lingual setting. The official repository for FastText has several pretrained word embedding for multiple languages, including some Indian languages. The French, Hindi and Polish word embeddings, in particular, have been evaluated on Word Analogy datasets, which were released along with the paper.",
"cite_spans": [
{
"start": 45,
"end": 58,
"text": "Bengio, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 408,
"end": 431,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF20"
},
{
"start": 588,
"end": 613,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 690,
"end": 712,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF20"
},
{
"start": 837,
"end": 863,
"text": "(Pagliardini et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 948,
"end": 966,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 1064,
"end": 1086,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 1170,
"end": 1192,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 1477,
"end": 1498,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 1570,
"end": 1591,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 1829,
"end": 1847,
"text": "(Yang et al., 2019",
"ref_id": "BIBREF29"
},
{
"start": 2164,
"end": 2186,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 2269,
"end": 2292,
"text": "(Artetxe et al., 2018a)",
"ref_id": "BIBREF1"
},
{
"start": 2457,
"end": 2483,
"text": "(Lample and Conneau, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Survey",
"sec_num": "2."
},
{
"text": "Haider (2018) release word embeddings for the Urdu language, which is one of the Indian languages we do not cover with this work. To evaluate the quality of embeddings, they were tested on Urdu translations of English similarity datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Literature Survey",
"sec_num": "2."
},
{
"text": "We collect pre-training data for over 14 Indian languages (from a total of 22 scheduled languages in India), including Assamese (as), Bengali (bn), Gujarati (gu), Hindi (hi), Kannada (kn), Konkani (ko), Malayalam (ml), Marathi (mr), Nepali (ne), Odiya (or), Punjabi (pa), Sanskrit (sa), Tamil (ta) and Telugu (te). These languages account for more than 95% of the entire Indian population, with the most widely spoken language, Hindi, alone contributing 43% to the figure 3 . Nonetheless, data that is readily available for computational purposes has been excruciatingly limited, even for these 14 languages. One of the major contributions of this paper is the accumulation of data in a single repository. This dataset has been collected from various sources, including ILCI corpora (Choudhary and Jha, 2011; Bansal et al., 2013) , which contains parallel aligned corpora (including English) with Hindi as the source language in tourism and health domains. As a baseline dataset, we first extract text from Wikipedia dumps 4 , and then append the data from other sources onto it. We added the aforementioned ILCI corpus, and then for Hindi, we add the monolingual corpus from HinMonoCorp 0.5 (Bojar et al., 2014) , increasing the corpus size by 44 million sentences. For Hindi, Marathi, Nepali, Bengali, Tamil, and Gujarati, we add crawled corpus of film reviews and news websites 5 . For Sanskrit, we download a raw corpus of prose 6 and add it to our corpus. Further, we describe the preprocessing and tokenization of our data.",
"cite_spans": [
{
"start": 783,
"end": 808,
"text": "(Choudhary and Jha, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 809,
"end": 829,
"text": "Bansal et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1192,
"end": 1212,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Experiment Setup",
"sec_num": "3."
},
{
"text": "The corpora collected is intended to be set in general domain instead of being domain-specific, and hence we start by collecting general domain corpora via Wikimedia dumps. We also add corpora from various crawl sources to respective individual language corpus. All the corpora is then cleaned, with the first step being the removal of HTML tags and links which can occur due to the presence of crawled data. Then, foreign language sentences (including English) are removed from each corpus, so that the final pre-training corpus contains words from only its language. Along with foreign languages, numerals written in any language are also removed. Once these steps are completed, paragraphs in the corpus are split into sentences using sentence end markers such as full stop and question mark. Following this, we also remove any special characters which 3 https://en.wikipedia.org/wiki/List_of_ ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1."
},
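{
"text": "A minimal illustration of this cleaning pipeline is sketched below in Python, using only the standard library. The specific character ranges, sentence-end markers, and the crude Latin-script filter are illustrative assumptions rather than the exact per-language rules applied to our corpora.\nimport re\n\nTAG_OR_URL = re.compile(r'<[^>]+>|https?://[^ ]+')  # HTML tags and links left over from crawled pages\nSENT_END = re.compile(r'[\u0964?.!]')  # danda, question mark, full stop, exclamation mark\nNUMERAL = re.compile(r'[0-9\u0966-\u096f]+')  # Latin and Devanagari numerals (illustrative; extend per script)\nFOREIGN = re.compile(r'[A-Za-z]')  # crude filter for English/foreign-script material\nSPECIAL = re.compile(r'[\",;:()\\[\\]-]')  # leftover special characters and punctuation\n\ndef preprocess(raw_text):\n    'Clean one crawled document and return a list of sentences.'\n    text = TAG_OR_URL.sub(' ', raw_text)\n    sentences = []\n    for piece in SENT_END.split(text):\n        piece = NUMERAL.sub(' ', piece)\n        if FOREIGN.search(piece):  # drop sentences containing foreign-language words\n            continue\n        piece = SPECIAL.sub(' ', piece)\n        piece = ' '.join(piece.split())\n        if piece:\n            sentences.append(piece)\n    return sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1."
},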
{
"text": "There is a prevailing scarcity of standardised benchmarks for testing the efficacy of various word embedding models for resource-poor languages. We conducted experiments across some rare standardised datasets that we could find and created new evaluation tasks as well to test the quality of non-contextual word embeddings. The Named Entity Recognition task, collected from (Murthy et al., 2018) , and FIRE 2014 workshop for NER, contains NER tagged data for 5 Indian languages, namely Hindi, Tamil, Bengali, Malayalam, and Marathi. We also use a Universal POS (UPOS), as well as an XPOS (language-specific PoS tags) tagged dataset, available from the Universal Dependency (UD) treebank (Nivre et al., 2016) , which contains POS tagged data for 4 Indian languages, Hindi, Tamil, Telugu, and Marathi. For the tasks of NER, UPOS tagging, XPOS tagging, we use the Flair library (Akbik et al., 2018) , which embeds our pre-trained embeddings as inputs for training the corresponding tagging models. The tagging models provided by Flair are vanilla BiLSTM-CRF sequence labellers. For the task of word analogy dataset, we simply use the vector addition and subtraction operators to check accuracy (i.e., v(France) \u2212 v(Paris) + v(Berlin) should be close to v(Germany)). For contextual word embeddings, we collect the statistics provided at the end of the pre-training phase to gauge the quality of the embeddings -perplexity scores for ELMo, masked language model accuracy for BERT, and so on. We report these values in Table 2 .",
"cite_spans": [
{
"start": 374,
"end": 395,
"text": "(Murthy et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 687,
"end": 707,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 875,
"end": 895,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1513,
"end": 1520,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "3.2."
},
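{
"text": "As a concrete example of the word analogy check described above, the snippet below (a sketch, assuming gensim and a placeholder model file) scores one analogy query with gensim's KeyedVectors; in practice the queries are drawn from the Indian-language analogy sets rather than this English illustration, and the tagging evaluations go through Flair's standard sequence-labelling training loop, which we do not reproduce here.\nfrom gensim.models import KeyedVectors\n\n# Load one of the released non-contextual models (file name is a placeholder).\nvectors = KeyedVectors.load('hi_fasttext_300.kv')\n\n# v(France) - v(Paris) + v(Berlin) should be close to v(Germany):\n# 'positive' words are added, 'negative' words are subtracted.\ncandidates = vectors.most_similar(positive=['France', 'Berlin'], negative=['Paris'], topn=5)\n\n# Count the analogy as correct if the expected answer is the top-ranked candidate.\nis_correct = candidates[0][0] == 'Germany'\nprint(candidates, is_correct)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "3.2."
},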
{
"text": "In this section, we briefly describe the models created using the approaches mentioned above in the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Evaluation",
"sec_num": "4."
},
{
"text": "Word2Vec embeddings (Mikolov et al., 2013b) ",
"cite_spans": [
{
"start": 20,
"end": 43,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word2Vec (skip-gram and CBOW)",
"sec_num": "4.1."
},
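{
"text": "A sketch of this training setup is shown below, assuming gensim >= 4.0 and a placeholder corpus path; apart from the vector size, the minimum frequency of 2, and the choice between CBOW and skip-gram, gensim defaults are kept.\nfrom gensim.models import Word2Vec\nfrom gensim.models.word2vec import LineSentence\n\ncorpus = LineSentence('hi_corpus.txt')  # one preprocessed sentence per line (placeholder path)\n\nfor dim in (50, 100, 200, 300):\n    for sg in (0, 1):  # 0 = CBOW, 1 = skip-gram\n        model = Word2Vec(corpus, vector_size=dim, sg=sg, min_count=2)\n        arch = 'sg' if sg else 'cbow'\n        model.wv.save('hi_word2vec_%s_%d.kv' % (arch, dim))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word2Vec (skip-gram and CBOW)",
"sec_num": "4.1."
},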
{
"text": "FastText embeddings (Bojanowski et al., 2017) of dimensions {50, 100, 200, 300} (skip-gram architecture) were created using the gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation of FastText. Words with a frequency less than 2 in the entire corpus are treated as unknown (out-ofvocabulary) words. For other parameters, default settings of gensim are used. Except for Konkani and Punjabi, the official repository for FastText provides pre-trained word embeddings for the Indian languages. However, we have trained our word embeddings on a much larger corpus than those used by FastText.",
"cite_spans": [
{
"start": 20,
"end": 45,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FastText",
"sec_num": "4.2."
},
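{
"text": "The FastText setup differs from the Word2Vec one mainly in that sub-word character n-grams are learned alongside the word vectors; a minimal sketch under the same assumptions (gensim >= 4.0, placeholder paths) follows.\nfrom gensim.models import FastText\nfrom gensim.models.word2vec import LineSentence\n\ncorpus = LineSentence('hi_corpus.txt')  # placeholder path\n\nfor dim in (50, 100, 200, 300):\n    # sg=1 selects the skip-gram architecture; min_n and max_n are gensim's default\n    # character n-gram bounds, written out only to make the sub-word modelling explicit.\n    model = FastText(corpus, vector_size=dim, sg=1, min_count=2, min_n=3, max_n=6)\n    model.wv.save('hi_fasttext_%d.kv' % dim)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FastText",
"sec_num": "4.2."
},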
{
"text": "We create GloVe embeddings (Pennington et al., 2014) of dimensions {50, 100, 200, and 300}. Words with occurrence frequency less than 2 are not included in the library. The co-occurrence matrix is created using a symmetric window of size 15. There are no pre-trained word embeddings for any of the 14 languages available with the GloVE embeddings repository 7 . We create these models and provide them with our repository.",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GloVe",
"sec_num": "4.3."
},
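{
"text": "GloVe itself is trained with the Stanford reference toolkit rather than gensim, but the resulting plain-text vectors can be consumed in the same way as the other non-contextual models; a small loading sketch, assuming gensim >= 4.0 and a placeholder file name, is given below.\nfrom gensim.models import KeyedVectors\n\n# GloVe text output has no header line, unlike the word2vec text format.\nglove = KeyedVectors.load_word2vec_format('hi_glove_300.txt', binary=False, no_header=True)\n\nprint(glove.most_similar('\u092d\u093e\u0930\u0924', topn=5))  # nearest neighbours of a query word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GloVe",
"sec_num": "4.3."
},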
{
"text": "MUSE embeddings are cross-lingual embeddings that can be trained using the fastText embeddings, which we had created previously. Due to resource constraints and the fact that cross-lingual representations require a large amount of data, we choose to train 50-dimensional embeddings for each language pair. We train for all the language pairs (14*14) and thus produce 196 models using this approach and provide them in our repository. The training for these models took 2 days over 1 x 2080Ti GPU (12 GB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUSE",
"sec_num": "4.4."
},
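{
"text": "MUSE is run from its released scripts, so no training code is reproduced here; the sketch below only illustrates, with plain NumPy and random placeholder arrays, the supervised alignment idea underlying such cross-lingual embeddings: solving an orthogonal Procrustes problem over a seed dictionary. It is a conceptual illustration, not our exact pipeline.\nimport numpy as np\n\ndef procrustes_align(src, tgt):\n    'Return the orthogonal map W minimising the Frobenius norm of (src @ W - tgt).'\n    # src, tgt: (n, d) arrays holding embeddings of n seed-dictionary word pairs.\n    u, _, vt = np.linalg.svd(src.T @ tgt)\n    return u @ vt\n\n# Toy usage with random stand-ins for two monolingual embedding spaces.\nrng = np.random.default_rng(0)\nsrc = rng.normal(size=(1000, 50))\ntgt = rng.normal(size=(1000, 50))\nW = procrustes_align(src, tgt)\naligned_src = src @ W  # source vectors projected into the target space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUSE",
"sec_num": "4.4."
},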
{
"text": "We train ELMo embeddings (Peters et al., 2018) of 512 dimensions. These vectors are learned functions of the internal states of a deep bidirectional language model (biLM). The training time for each language corpus was approximately 1 day on a 12 GB Nvidia GeForce GTX TitanX GPU. The batch size is reduced to 64, and the embedding model was trained on a single GPU. The number of training tokens was set to tokens multiplied by 5. We choose this parameter based on the assumption that each sentence contains an average of 4 tokens. There are no pre-trained word embeddings for any of the 14 languages available on the official repository. We provide these models in our repository. ",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELMo",
"sec_num": "4.5."
},
{
"text": "We train BERT (Bidirectional Encoder Representations from Transformers) embeddings (Devlin et al., 2018) of 300 dimensions. Since BERT can be used to train a single multilingual model, we combine and shuffle corpora of all languages into a single corpus and used this as the pretraining data. We use sentence piece embeddings (Google, 2018) that we trained on the corpus with a vocabulary size of 25000. Pre-training this model was completed in less than 1 day using 3 * 12 GB Tesla K80 GPUs. The official repository for BERT provides a multilingual model of 102 languages, which includes all but 4 (Oriya, Assamese, Sanskrit, Konkani) of the 14 languages. We provide a single multilingual BERT model for all the 14 languages, including these 4 languages.",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 326,
"end": 340,
"text": "(Google, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.6."
},
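{
"text": "Only the vocabulary step is sketched below: training a 25,000-piece sentencepiece model on the combined, shuffled corpus (file names are placeholders; the BERT pre-training itself follows the reference implementation and is not reproduced here).\nimport sentencepiece as spm\n\n# Learn a shared sub-word vocabulary over the combined corpus of all 14 languages.\nspm.SentencePieceTrainer.train(\n    input='combined_indic_corpus.txt',  # one shuffled sentence per line (placeholder)\n    model_prefix='indic_sp',\n    vocab_size=25000,\n    character_coverage=0.9995,  # keep almost all Indic characters\n)\n\n# The resulting model can then segment raw text before pre-training.\nsp = spm.SentencePieceProcessor(model_file='indic_sp.model')\nprint(sp.encode('\u092f\u0939 \u090f\u0915 \u0909\u0926\u093e\u0939\u0930\u0923 \u0939\u0948', out_type=str))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.6."
},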
{
"text": "We train cross-lingual contextual BERT representation language model using the XLM git repository 8 . We train this model for 300-dimensional embeddings and over the standard hyperparameters as described with their work. The corpus vocabulary size of 25000 was chosen. We use a combined corpus of all 14 Indian languages and shuffle the sentences for data preparation of this model. We use the monolingual model (MLM) method to prepare data as described on their Git repository. This model also required the Byte-pair encoding representations as input and we train them using the standard fastBPE implementation as recommended over their Github. The training for this model took 6 days and 23 hours over 3 x V100 GPUs (16 GB each).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM",
"sec_num": "4.7."
},
{
"text": "We evaluate and compare the performance of FastText, Word2Vec, and GloVE embedding models on UPOS and XPOS datasets. The results are shown in the image 1a and in 1b, respectively. The performance of non-contextual word embedding models on NER dataset is shown in image 2. The perplexity scores for ELMo training are listed in table 2. We observe that FastText outperforms both GloVE and Word2Vec models. For Indian languages, the performance of FastText is also an indication of the fact that morphologically rich languages require embedding models with sub-word enriched information. This is clearly depicted in our evaluation. The overall size of all the aforementioned models was very large to be hosted on a Git repository. We host all of these embeddings in a downloadable ZIP format each on our server, which can be accessed via the link provided above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.8."
},
{
"text": "We have created a comprehensive set of standard word embeddings for multiple Indian languages. We release a total of 422 embedding models for 14 Indic languages. The models contain 4 varying dimensions (50, 100, 200, and 300) each of GloVE, Skipgram, CBOW, and FastText; 1 each of ELMo for every language; a single model each of BERT and XLM of all languages. They also consist of 182 cross-lingual word embedding models for each pair. However, due to the differences in language properties as well as corpora sizes, the quality of the models vary. Table 1 shows the language wise corpus statistics. Evaluation of the models has already been presented in Section 4.8.. An interesting point is that even though Tamil and Telugu have comparable corpora sizes, the evaluations of their word embeddings show different results. Telugu models consistently outperform Tamil models on all common tasks. Note that the NER tagged dataset was not available for Telugu, so they could not be compared on this task. This also serves to highlight the difference between the properties of these two languages. Even though they belong to the same language family, Dravidian, and their dataset size is the same, their evaluations show a marked difference. Each language has 3 non-contextual embeddings (word2vec-skipgram, word2vec-cbow and fasttextskipgram), and a contextual embedding (ElMo). Along with this, we have created multilingual embeddings via BERT. For BERT pre-training, the masked language model accuracy is 31.8% and next sentence prediction accuracy is 67.9%. Cross-lingual embeddings, on the other hand, have been created using XLM and MUSE.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5."
},
{
"text": "The recent past has seen tremendous growth in NLP with ElMo, BERT and XLNet being released in quick succession. All such advances have improved the state-of-the-art in various tasks like NER, Question Answering, Machine Translation, etc. However, most of these results have been presented predominantly for a single language-English. With the potential that Indian languages computing has, it becomes pertinent to perform research in word embeddings for local, low-resource languages as well. In this paper, we present the work done on creating a single repository of corpora for 14 Indian languages. We also discuss the creation of different embedding models in detail. As for our primary contribution, these word embedding models are being publicly released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
{
"text": "In the future, we aim to refine these embeddings and do a more exhaustive evaluation over various tasks such as POS tagging for all these languages, NER for all Indian languages, including a word analogy task. Presently evaluations have been carried out on only a few of these tasks. Also, with newer embedding techniques being released in quick successions, we hope to include them in our repository. The model's parameters can be trained further for specific tasks or improving their performance in general. We hope that our work serves as a stepping stone to better embeddings for low-resource Indian languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
{
"text": "Repository Link",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/XLM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akbik, A., Blythe, D., and Vollgraf, R. (2018). Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A robust self-learning method for fully unsupervised crosslingual mappings of word embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018a). A ro- bust self-learning method for fully unsupervised cross- lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 789-798.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01272"
]
},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018b). Unsu- pervised statistical machine translation. arXiv preprint arXiv:1809.01272.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An effective approach to unsupervised machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.01313"
]
},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2019). An ef- fective approach to unsupervised machine translation. arXiv preprint arXiv:1902.01313.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Corpora creation for indian language technologies-the ilci project",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "G",
"middle": [
"N"
],
"last": "Jha",
"suffix": ""
}
],
"year": 2013,
"venue": "the sixth Proceedings of Language Technology Conference (LTC '13)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bansal, A., Banerjee, E., and Jha, G. N. (2013). Cor- pora creation for indian language technologies-the ilci project. In the sixth Proceedings of Language Technol- ogy Conference (LTC '13).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword informa- tion. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hindencorphindi-english and hindi-only corpus for machine translation",
"authors": [
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Diatka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rychl\u1ef3",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Stran\u00e1k",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Suchomel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tamchyna",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "3550--3555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojar, O., Diatka, V., Rychl\u1ef3, P., Stran\u00e1k, P., Suchomel, V., Tamchyna, A., and Zeman, D. (2014). Hindencorp- hindi-english and hindi-only corpus for machine transla- tion. In LREC, pages 3550-3555.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Question answering with subgraph embeddings",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Weston",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.3676"
]
},
"num": null,
"urls": [],
"raw_text": "Bordes, A., Chopra, S., and Weston, J. (2014). Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Universal sentence encoder",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "S.-Y",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "R",
"middle": [
"S"
],
"last": "John",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11175"
]
},
"num": null,
"urls": [],
"raw_text": "Cer, D., Yang, Y., Kong, S.-y., Hua, N., Limtiaco, N., John, R. S., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., et al. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Creating multilingual parallel corpora in indian languages",
"authors": [
{
"first": "N",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "G",
"middle": [
"N"
],
"last": "Jha",
"suffix": ""
}
],
"year": 2011,
"venue": "Language and Technology Conference",
"volume": "",
"issue": "",
"pages": "527--537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choudhary, N. and Jha, G. N. (2011). Creating multilin- gual parallel corpora in indian languages. In Language and Technology Conference, pages 527-537. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word translation without parallel data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Conneau, A., Lample, G., Ranzato, M., Denoyer, L., and J\u00e9gou, H. (2017). Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Query expansion with locally-trained word embeddings",
"authors": [
{
"first": "F",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07891"
]
},
"num": null,
"urls": [],
"raw_text": "Diaz, F., Mitra, B., and Craswell, N. (2016). Query ex- pansion with locally-trained word embeddings. arXiv preprint arXiv:1605.07891.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sentence piece embeddings",
"authors": [
{
"first": "",
"middle": [],
"last": "Google",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Google. (2018). Sentence piece embeddings. https:// github.com/google/sentencepiece.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Urdu word embeddings",
"authors": [
{
"first": "S",
"middle": [],
"last": "Haider",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haider, S. (2018). Urdu word embeddings. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Lample, G. and Conneau, A. (2019). Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and documents. In International conference on machine learning, pages 1188-1196.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Zero-shot relation extraction via reading comprehension",
"authors": [
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04115"
]
},
"num": null,
"urls": [],
"raw_text": "Levy, O., Seo, M., Choi, E., and Zettlemoyer, L. (2017). Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "con-text2vec: Learning generic context embedding with bidirectional lstm",
"authors": [
{
"first": "O",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th SIGNLL conference on computational natural language learning",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamud, O., Goldberger, J., and Dagan, I. (2016). con- text2vec: Learning generic context embedding with bidi- rectional lstm. In Proceedings of the 20th SIGNLL con- ference on computational natural language learning, pages 51-61.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross-lingual word embeddings and the structure of the human bilingual lexicon",
"authors": [
{
"first": "P",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Rodriguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Merlo, P. and Rodriguez, M. A. (2019). Cross-lingual word embeddings and the structure of the human bilin- gual lexicon. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 110-120.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111- 3119.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving ner tagging performance in low-resource languages via multilingual learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Murthy",
"suffix": ""
},
{
"first": "M",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)",
"volume": "18",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murthy, R., Khapra, M. M., and Bhattacharyya, P. (2018). Improving ner tagging performance in low-resource lan- guages via multilingual learning. ACM Transactions on Asian and Low-Resource Language Information Pro- cessing (TALLIP), 18(2):9.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "M.-C",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J., De Marneffe, M.-C., Ginter, F., Goldberg, Y., Hajic, J., Manning, C. D., McDonald, R., Petrov, S., Pyysalo, S., Silveira, N., et al. (2016). Universal de- pendencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised learning of sentence embeddings using compositional n-gram features",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pagliardini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.02507"
]
},
"num": null,
"urls": [],
"raw_text": "Pagliardini, M., Gupta, P., and Jaggi, M. (2017). Unsupervised learning of sentence embeddings us- ing compositional n-gram features. arXiv preprint arXiv:1703.02507.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R., and Manning, C. (2014). Glove: Global vectors for word representation. In Pro- ceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "R",
"middle": [],
"last": "Reh\u016f\u0159ek",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reh\u016f\u0159ek, R. and Sojka, P. (2010). Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta, May. ELRA. http://is.muni.cz/ publication/884893/en.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "In Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Bengio, R. Ducharme, P. V. (2003). A neural proba- bilistic language model. In Journal of Machine Learning Research, pages 3:1137-1155.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08237"
]
},
"num": null,
"urls": [],
"raw_text": "Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. (2019). Xlnet: Generalized autore- gressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "(a) Performance on UPOS tagged dataset (b) Performance on XPOS tagged dataset",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Performance of skip-gram, CBOW, and fasttext models on NER tagged dataset",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "may have included punctuation marks (example -hyphens, commas etc.). The statistics for the resulting corpus are listed inTable 1.",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"3\">Language Abbr. Sentences</td><td>Words</td></tr><tr><td>Hindi</td><td>hin</td><td colspan=\"2\">48,115,256 3,419,909</td></tr><tr><td>Bengali</td><td>ben</td><td>1,563,137</td><td>707,473</td></tr><tr><td>Telugu</td><td>tel</td><td colspan=\"2\">1,019,430 1,255,086</td></tr><tr><td>Tamil</td><td>tam</td><td>881,429</td><td>1,407,646</td></tr><tr><td>Nepali</td><td>nep</td><td>705,503</td><td>314,408</td></tr><tr><td>Sanskrit</td><td>san</td><td>553,103</td><td>448,784</td></tr><tr><td>Marathi</td><td>mar</td><td>519,506</td><td>498,475</td></tr><tr><td>Punjabi</td><td>pan</td><td>503,330</td><td>247,835</td></tr><tr><td>Malayalam</td><td>mal</td><td>493,234</td><td>1,325,212</td></tr><tr><td>Gujarati</td><td>guj</td><td>468,024</td><td>182,566</td></tr><tr><td>Konkani</td><td>knn</td><td>246,722</td><td>76,899</td></tr><tr><td>Oriya</td><td>ori</td><td>112,472</td><td>55,312</td></tr><tr><td>Kannada</td><td>kan</td><td>51,949</td><td>30,031</td></tr><tr><td>Assamese</td><td>asm</td><td>50,470</td><td>29,827</td></tr></table>",
"html": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF2": {
"text": "of dimensions {50, 100, 200, 300} for both skip-gram and CBOW architectures are created using the gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation of Word2Vec. Words with a frequency less than 2 in the entire corpus are treated as unknown (out-of-vocabulary) words. For other parameters, default settings of gensim are used. There are no pre-trained Word2Vec word embeddings for any of the 14 languages available publicly.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF4": {
"text": "ELMo prerplexity scores",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}