{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:23.419877Z"
},
"title": "NPVec1: Word Embeddings for Nepali -Construction and Evaluation",
"authors": [
{
"first": "Pravesh",
"middle": [],
"last": "Koirala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pulchowk Campus Lalitpur",
"location": {
"country": "Nepal"
}
},
"email": "[email protected]"
},
{
"first": "Nobal",
"middle": [
"B"
],
"last": "Niraula",
"suffix": "",
"affiliation": {
"laboratory": "Nowa Lab Madison",
"institution": "",
"location": {
"settlement": "Alabama",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word Embedding maps words to vectors of real numbers. It is derived from a large corpus and is known to capture semantic knowledge from the corpus. Word Embedding is a critical component of many state-of-the-art Deep Learning techniques. However, generating good Word Embeddings is a special challenge for low-resource languages such as Nepali due to the unavailability of large text corpus. In this paper, we present NPVec1 which consists of 25 state-of-art Word Embeddings for Nepali that we have derived from a large corpus using GloVe, Word2Vec, fastText, and BERT. We further provide intrinsic and extrinsic evaluations of these Embeddings using well established metrics and methods. These models are trained using 279 million word tokens and are the largest Embeddings ever trained for Nepali language. Furthermore, we have made these Embeddings publicly available to accelerate the development of Natural Language Processing (NLP) applications in Nepali.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Word Embedding maps words to vectors of real numbers. It is derived from a large corpus and is known to capture semantic knowledge from the corpus. Word Embedding is a critical component of many state-of-the-art Deep Learning techniques. However, generating good Word Embeddings is a special challenge for low-resource languages such as Nepali due to the unavailability of large text corpus. In this paper, we present NPVec1 which consists of 25 state-of-art Word Embeddings for Nepali that we have derived from a large corpus using GloVe, Word2Vec, fastText, and BERT. We further provide intrinsic and extrinsic evaluations of these Embeddings using well established metrics and methods. These models are trained using 279 million word tokens and are the largest Embeddings ever trained for Nepali language. Furthermore, we have made these Embeddings publicly available to accelerate the development of Natural Language Processing (NLP) applications in Nepali.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent Deep Learning (DL) techniques provide state-of-the-art performances in almost all Natural Language Processing (NLP) tasks such as Text Classification (Conneau et al., 2016; Yao et al., 2019; Zhou et al., 2015) , Question Answering (Peters et al., 2018; Devlin et al., 2018) , Named Entity Recognition (Huang et al., 2015; Lample et al., 2016) and Sentiment Analysis (Zhang et al., 2018; Severyn and Moschitti, 2015) . DL techniques are attractive due to their capacity of learning complex and intricate features automatically from the raw data (Li et al., 2020) . This significantly reduces the required time and effort for feature engineering, a costly step in traditional feature-based approaches which further requires considerable amount of engineering and domain expertise. Thus, DL techniques are very useful for low-resource languages such as Nepali.",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Conneau et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 180,
"end": 197,
"text": "Yao et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 198,
"end": 216,
"text": "Zhou et al., 2015)",
"ref_id": "BIBREF37"
},
{
"start": 238,
"end": 259,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 260,
"end": 280,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 308,
"end": 328,
"text": "(Huang et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 329,
"end": 349,
"text": "Lample et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 373,
"end": 393,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 394,
"end": 422,
"text": "Severyn and Moschitti, 2015)",
"ref_id": "BIBREF30"
},
{
"start": 551,
"end": 568,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many Deep Learning techniques require Word Embeddings to represent each word by a vector of real numbers. Word Embeddings learn a meaningful representation of words directly from a large unlabeled corpus using co-occurrence statistics . The closer the word representations to actual meanings, the better the performance. Consequently, Word Embeddings have received special attention from the research community and are predominantly used in current NLP researches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word Embeddings can generally be divided into two categories: Context-Independent embeddings such as GloVe (Pennington et al., 2014) , Word2Vec (Mikolov et al., 2013) , and fastText , and Context-Dependent embeddings such as BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) and ELMo (Embeddings from Language Models) (Peters et al., 2018) . Context-dependent word embedding is generated for a word as a function of the sentence it occurs in. Thus, it can learn multiple representations for polysemous words (Peters et al., 2018) . To learn these deep contextualized representations, BERT uses a transformer based architecture pretrained on Masked Language Modelling and Next Sentence Prediction tasks, whereas, ELMo uses a Bidirectional LSTM architecture for combining both forward and backward language models.",
"cite_spans": [
{
"start": 107,
"end": 132,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 144,
"end": 166,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 288,
"end": 309,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 353,
"end": 374,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 537,
"end": 564,
"text": "words (Peters et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present NPVec1, a suite of Word Embedding resources for Nepali, a lowresource language, which is the official language and de-facto lingua franca of Nepal. It is spoken by more than 20 million people mainly in Nepal and many other places in the world including Bhutan, India, and Myanmar (Niraula et al., 2020) . Even though Word Embeddings can be directly learned from raw texts in an unsupervised fashion, gathering a large amount of data for its training remains a huge challenge in itself for a low-resource lan-guage such as Nepali. In addition, Nepali is a morphologically rich language which has multiple agglutinative suffixes as well as affix inflections and thus proves challenges during its preprocessing i.e. tokenization, normalization and stemming.",
"cite_spans": [
{
"start": 279,
"end": 328,
"text": "Bhutan, India, and Myanmar (Niraula et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We have collected data over many years and combined it with multiple other publicly available data sets to generate a suite of Word Embeddings, i.e. NPVec1, using GloVe, Word2Vec, fastText and BERT. It consists of 25 Word Embeddings corresponding to different preprocessing schemes. In addition, we perform the intrinsic and extrinsic evaluations of the generated Word Embeddings using well established methods and metrics. Our pretrained Embedding models and resources are made publicly available 1 for the acceleration and development of NLP research and application in Nepali language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The novel contributions of this study are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First formal analyses of different Word Embeddings in Nepali language using intrinsic and extrinsic methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First study of effects of preprocessing such as normalization, tokenization and stemming in different Word Embeddings in Nepali language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First contextualized word embedding (BERT) generation and evaluation in Nepali language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The largest Word2Vec, GloVe, fastText and BERT based Word Embeddings ever trained and made available for Nepali language to date.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. We review related works in Section 2. We describe the data collection and corpus construction in Section 3. We describe our experiments to develop Word Embedding methods in Section 4. We present model evaluations in Section 5 and conclusion and future directions in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word Embeddings provide continuous word representations and are the building blocks of many NLP applications. They capture distributional information of words from a large corpora. This information helps the generalization of machine learning models especially when the data set is limited . Word Embedding tools, technologies and pre-trained models are widely available for resource rich languages such as English (Mikolov et al., 2013; Pennington et al., 2014; and Chinese (Li et al., 2018; Chen et al., 2015) . Due to the wide use of Word Embeddings, pre-trained models are increasingly available for resource poor languages such as Portuguese (Hartmann et al., 2017) , Arabic (Elrazzaz et al., 2017; Soliman et al., 2017) , and Bengali (Ahmad and Amin, 2016) .",
"cite_spans": [
{
"start": 415,
"end": 437,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF22"
},
{
"start": 438,
"end": 462,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF26"
},
{
"start": 475,
"end": 492,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 493,
"end": 511,
"text": "Chen et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 647,
"end": 670,
"text": "(Hartmann et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 680,
"end": 703,
"text": "(Elrazzaz et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 704,
"end": 725,
"text": "Soliman et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 740,
"end": 762,
"text": "(Ahmad and Amin, 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Most Word Embedding algorithms are unsupervised. Which means that they can be trained for any language as long as the corpus data is available. One such effort is by Grave et al. (2018) who generated and made available word vectors for 157 languages, including Nepali, using Wikipedia and Common Crawl data. The pre-trained models for Skip-gram and CBOW are available at https: //fasttext.cc. Another useful resource is http: //vectors.nlpl.eu/repository which is a community repository for Word Embeddings maintained by Language Technology Group at the University of Oslo (Kutuzov et al., 2017) . It currently hosts 209 pre-trained word Embeddings for most languages but not Nepali.",
"cite_spans": [
{
"start": 166,
"end": 185,
"text": "Grave et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 573,
"end": 595,
"text": "(Kutuzov et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Word Embeddings for Nepali are derived in small scale by Grave et al. (2018) using fastText and by Lamsal (2019) using Word2Vec. Both of these efforts have major limitations. First, they have limited diversity in the corpus. Grave et al. use Wikipedia and Common Crawl data while Lamsal uses news corpus. Second, their corpus is very small compared to ours (Section 3). Third, they do not provide any evaluation of the generated models. Fourth, they have done limited or no prepossessing on the data. We show later in Section 3.3 that tokenization and text normalization are critical for processing morphologically rich Nepali text. In contrast, we have conducted a large scale study of Word Embeddings in more diverse and large data sets using GloVe, fastText, and Word2Vec. Our corpus is nearly four times bigger than the corpus used by aforementioned approaches (see Section 3). We have constructed 8 inputs for each combination of binary variables: Tokenization, Normalization and Stemming which has resulted in 24 pre-trained Embeddings for GloVe, Word2Vec, and fastText combined. Additionally, we have trained BERT for one of these preprocess-ing schemes and performed intrinsic and extrinsic evaluations for each of these 25 models.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "Grave et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this Section, we present our data sources and preprocessing techniques for the corpus. To help readers understand the Nepali words used in this paper, we have provided a gloss in Section 8 with their transliterations and English translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Preparation",
"sec_num": "3"
},
{
"text": "Our corpus consists of a mixture of news, Wikipedia articles, and OSCAR (Ortiz Su\u00e1rez et al., 2019) corpus. We summarize the data sets in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "3.1"
},
{
"text": "We crawled Nepali online news media over a year and collected more than 700,000 unique news articles (\u223c 3GB). As expected, the news articles cover diverse topics including politics, sports, technology, society, and so on. We obtained another news data set from IEEE DataPort (Lamsal, 2020) (1.7GB).",
"cite_spans": [
{
"start": 275,
"end": 289,
"text": "(Lamsal, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "News Corpus",
"sec_num": "3.1.1"
},
{
"text": "We obtained the shuffled data in deduplicated form (1.2GB) for Nepali language from OSCAR (Open Super-large Crawled ALMAnaCH coRpus) (Ortiz Su\u00e1rez et al., 2019). 2 It is a large multilingual corpus obtained by language classification and filtering of the Common Crawl corpus. Common Crawl 3 is a non-profit organization which collects data through web crawling and makes it publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OSCAR Nepali Corpus",
"sec_num": "3.1.2"
},
{
"text": "We obtained Nepali Wikipedia corpus from Kaggle (Gaurav, 2020). It consists of 39k Wikipedia articles for Nepali (83MB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nepali Wikipedia Corpus",
"sec_num": "3.1.3"
},
{
"text": "We collected data from multiple sources which might have crawled the same data. Furthermore, there were some boilerplate text in the data. Thus, it was important to remove duplicate texts from the corpus. To remove these duplicates, we followed an approach similar to Grave et al. (2018) . With this approach, we computed hash for each sentence and collected the sentence only if the hash was not known before. We were able to remove \u223c 22% duplicated sentences from our corpus.",
"cite_spans": [
{
"start": 268,
"end": 287,
"text": "Grave et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deduplication",
"sec_num": "3.2"
},
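To make the deduplication step concrete, the following is a minimal sketch of hash-based sentence deduplication, assuming one sentence per line in the input files; the file handling and the choice of MD5 are illustrative, not the authors' exact pipeline.

```python
# Minimal sketch of hash-based sentence deduplication, assuming one sentence
# per line in the input files. File handling and the MD5 choice are
# illustrative, not the authors' exact pipeline.
import hashlib

def deduplicate(input_paths, output_path):
    seen = set()
    kept = dropped = 0
    with open(output_path, "w", encoding="utf-8") as out:
        for path in input_paths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    sentence = line.strip()
                    if not sentence:
                        continue
                    # Keep the sentence only if its hash has not been seen before.
                    digest = hashlib.md5(sentence.encode("utf-8")).hexdigest()
                    if digest in seen:
                        dropped += 1
                        continue
                    seen.add(digest)
                    kept += 1
                    out.write(sentence + "\n")
    print(f"kept {kept} sentences, dropped {dropped} duplicates")
```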
{
"text": "After removing duplicates, we discarded sentences with less than 10 characters as they provide little context to learn Word Embeddings. We also removed punctuations and replaced numbers with a special NN token. We then applied following Normalization, Tokenization and Stemming preprocessing techniques to derive corpus for the study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.3"
},
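A minimal sketch of this cleaning step is given below (dropping short sentences, replacing numbers with the NN token, and removing punctuation); the exact punctuation set and regular expressions are assumptions, not the authors' actual rules.

```python
# Sketch of the basic cleaning step: drop very short sentences, replace numbers
# with the special NN token, and strip punctuation. The regular expressions are
# assumptions, not the authors' exact rules.
import re

NUM_RE = re.compile(r"[0-9\u0966-\u096F]+")                  # Latin and Devanagari digits
PUNCT_RE = re.compile(r"[\.,;:!?\"'()\[\]{}\u0964\u0965-]")  # includes the danda marks

def clean_sentence(sentence):
    sentence = sentence.strip()
    if len(sentence) < 10:                    # discard sentences with fewer than 10 characters
        return None
    sentence = NUM_RE.sub(" NN ", sentence)   # numbers -> NN token
    sentence = PUNCT_RE.sub(" ", sentence)    # remove punctuation
    return " ".join(sentence.split())         # collapse extra whitespace
```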
{
"text": "Analogous to how there are different cases (lower/upper) in English with no phonetic differences, there are different written vowels sounds in Nepali which, when spoken, are indistinguishable from each other. For example: the two different words \u0928 \u092a\u093e\u0932 (Nepali) and \u0928 \u092a\u093e \u0932 are spoken the same way even though their written representations differ. Thus, people often mistakenly use multiple written version of the same words which introduces noise in the data set. Normalization, in the context of this study, is identification of all these nuances and mapping them to a same word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3.1"
},
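As an illustration of this normalization, the sketch below maps a few commonly confused Devanagari vowel signs to a single form; the mapping is a small, hypothetical subset and not the authors' full rule set.

```python
# Illustrative normalization of commonly confused Devanagari vowel signs.
# This mapping is a hypothetical subset, not the authors' full rule set.
NORMALIZATION_MAP = {
    "\u0940": "\u093f",  # long ii vowel sign -> short i vowel sign
    "\u0942": "\u0941",  # long uu vowel sign -> short u vowel sign
    "\u0908": "\u0907",  # independent letter ii -> i
    "\u090a": "\u0909",  # independent letter uu -> u
}

def normalize(text):
    return "".join(NORMALIZATION_MAP.get(ch, ch) for ch in text)
```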
{
"text": "Nepali language has multiple post-positional and agglutinative suffixes like \u0932 , \u092e\u093e, \u092c\u093e\u091f, \u0926 \u0916 etc., which can be compounded together with nouns and pronouns to produce new words. For example, the word \u0928 \u092a\u093e\u0932 (Nepalese) can be compounded as \u0928 \u092a\u093e\u0932 \u0932 (Nepalese did), \u0928 \u092a\u093e\u0932 \u0939 (Nepalese+plural), \u0928 \u092a\u093e\u0932 \u0915\u094b (Of Nepalese), so on and so forth. Thus, these different words can be tokenized as \u0928 \u092a\u093e\u0932 + \u0932 , \u0928 \u092a\u093e\u0932 + \u0939 , \u0928 \u092a\u093e\u0932 + \u0915\u094b which serves to drastically reduce the vocabulary size without the loss of any linguistic functionality. Tokenization, in this context, means the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization",
"sec_num": "3.3.2"
},
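A minimal sketch of such suffix splitting is shown below; the suffix list is a small illustrative subset of common Nepali postpositions and does not reproduce the authors' tokenizer.

```python
# Illustrative splitting of postpositional suffixes from a word. The suffix
# list is a small subset of common Nepali postpositions, not the authors'
# tokenizer.
SUFFIXES = ["देखि", "हरू", "बाट", "ले", "मा", "को"]  # dekhi, haru, baata, le, maa, ko

def split_suffix(word):
    # Try longer suffixes first so a shorter suffix does not shadow a longer one.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix):
            return [word[: -len(suffix)], suffix]
    return [word]
```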
{
"text": "In addition, there are also other case markers and bound suffixes that primarily inflect verbs to produce new words. For example, from the same root word \u0916\u093e (eat), words such as \u0916\u093e\u092f\u094b (ate), \u0916\u093e \u0926 (eating), \u0916\u093e\u090f\u0915\u094b (had eaten), \u0916\u093e\u090f\u0930 (after eating), etc can be constructed. Stemming, in this context, means the reduction of all such inflected words to their base forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stemming",
"sec_num": "3.3.3"
},
{
"text": "For the purpose of this study, we have improved upon the preprocessing techniques developed by Koirala and Shakya (2018) corpus. Specifically, we generated eight corpus corresponding to different combination of these three preprocessing techniques. The final eight corpus are listed in Table 2 .",
"cite_spans": [
{
"start": 95,
"end": 120,
"text": "Koirala and Shakya (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Stemming",
"sec_num": "3.3.3"
},
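The eight corpus variants can be derived by iterating over all combinations of the three binary switches, as in the sketch below; normalize, tokenize, and stem stand in for the preprocessing routines above, and the scheme labels (e.g., B, BNT, BNTS) follow the abbreviations used later in the evaluation, which may differ from the exact labels in Table 2.

```python
# Sketch of deriving the eight corpus variants from the three binary switches.
# normalize, tokenize, and stem stand in for the preprocessing routines above;
# each is assumed to take and return a sentence string.
from itertools import product

def build_variants(sentences, normalize, tokenize, stem):
    variants = {}
    for use_norm, use_tok, use_stem in product([False, True], repeat=3):
        name = "B" + ("N" if use_norm else "") + ("T" if use_tok else "") + ("S" if use_stem else "")
        processed = []
        for sentence in sentences:
            if use_norm:
                sentence = normalize(sentence)
            if use_tok:
                sentence = tokenize(sentence)
            if use_stem:
                sentence = stem(sentence)
            processed.append(sentence)
        variants[name] = processed
    return variants  # eight keys: B, BS, BT, BTS, BN, BNS, BNT, BNTS
```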
{
"text": "We chose three state-of-the-art methods for obtaining context-independent Word Embeddings, namely Word2vec, fastText and GloVe. Word embeddings from these methods were learned with the same parameters for fair comparison. We fixed vector dimension to 300 and set minimum word frequency, window size, and the negative sampling size to 5 respectively. Word2vec and fastText models were trained via the Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation using skipgram method. Whereas, GloVe embeddings were trained via the tool provided by StanfordNLP 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-independent Word Embeddings",
"sec_num": "4.1"
},
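A minimal sketch of this setup with Gensim is shown below, using the stated hyperparameters (300 dimensions; minimum count, window size, and negative sampling all set to 5, skip-gram); parameter names follow Gensim 4.x and the corpus loading is illustrative.

```python
# Sketch of training the skip-gram Word2Vec and fastText models with Gensim
# using the stated hyperparameters. Parameter names follow Gensim 4.x; corpus
# loading is illustrative.
from gensim.models import FastText, Word2Vec

def train_static_embeddings(corpus_path):
    # One preprocessed sentence per line, tokens separated by whitespace.
    with open(corpus_path, encoding="utf-8") as f:
        sentences = [line.split() for line in f if line.strip()]
    params = dict(vector_size=300, window=5, min_count=5, negative=5, sg=1, workers=4)
    w2v = Word2Vec(sentences, **params)
    ft = FastText(sentences, **params)
    return w2v, ft
```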
{
"text": "We chose BERT to learn context-dependent embeddings. We trained a BERT model using the Huggingface's transformers library (Wolf et al., 2019) . BERT model, unlike the other word embedding models, was only trained in one pre-processing scheme i.e. 4 https://github.com/stanfordnlp/GloVe base+normalized+tokenized (BNT) 5 due to resource constraints. Due to the same reason, we reduced both the number of hidden layers and the attention heads to 6 and the hidden dimensions to 300 unlike the original implementation of 12 hidden layers and attention heads and 768 hidden dimensions. The maximum sequence size was chosen to be 512 whereas maximum vocabulary size for the BERT's wordpiece tokenizer was set to 30,000. Our implementation of BERT has 22.5M parameters (in contrast to the 110M parameters of the original implementation i.e. BERT-base) and unlike BERT's original implementation, where it is pre-trained on the task of Masked Language Modelling (MLM) and Next Sentence Prediction, we only pre-trained it for the MLM objective for just a single epoch due to limited computing resources.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-dependent Word Embeddings",
"sec_num": "4.2"
},
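The reduced configuration and MLM-only pre-training can be sketched with the transformers library as below; the WordPiece tokenizer is assumed to have been trained on the corpus beforehand, and the file paths, batch size, and Trainer wiring are illustrative rather than the authors' exact setup.

```python
# Sketch of the slimmed-down BERT configuration and MLM-only pre-training with
# the transformers library. The WordPiece tokenizer is assumed to have been
# trained on the BNT corpus beforehand; paths, batch size, and Trainer wiring
# are illustrative.
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("path/to/nepali-wordpiece")  # hypothetical path

config = BertConfig(
    vocab_size=30_000,            # WordPiece vocabulary size
    hidden_size=300,              # reduced from 768
    num_hidden_layers=6,          # reduced from 12
    num_attention_heads=6,        # reduced from 12
    max_position_embeddings=512,  # maximum sequence length
)
model = BertForMaskedLM(config)

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="corpus_bnt.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="npvec1-bert", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```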
{
"text": "Intrinsic evaluation of word embedding models is commonly performed in tasks such as analogies (Grave et al., 2018) . There is, however, no such data set available for Nepali language. Thus, we followed the clustering approach suggested in (Soliman et al., 2017) which requires a manually constructed data set of terms in different themes (clusters). The goal then is to recover these themes (clusters) using the learned word representations. We constructed following two data sets for the evaluation purposes.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "5.1"
},
{
"text": "This set consisted of twenty one word examples each from two different topics i.e. kitchen and nature. The kitchen topic included words such as \u091a \u0928 (sugar) \u0928 \u0928 (salt) \u092d\u093e\u0921\u094b (pot) etc. whereas, the nature topic included words such as \u0939\u092e\u093e\u0932 (mountain), \u092a\u0939\u093e\u0921 (hill), \u0916\u094b\u0932\u093e (river) etc. The Relatedness data set is presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Relatedness Set",
"sec_num": "5.1.1"
},
{
"text": "This set consisted of nineteen examples each of positive and negative sentiments. The positive sentiment set included words such as \u0930\u093e \u094b (good), \u0920 \u0932\u094b (big), \u093e\u092f(justice), etc. whereas the negative sentiment set included their antonyms such as \u0928\u0930\u093e \u094b (bad), \u0938\u093e\u0928\u094b(small), \u0905 \u093e\u092f(injustice) etc. The Sentiment data set is presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Sentiment Set",
"sec_num": "5.1.2"
},
{
"text": "Ideally word embeddings should capture both word relatedness and word similarity properties of a word. These two terms are related but are not the same Banjade et al., 2015) . For example, chicken and egg are less similar (living vs non-living) but are highly related as they often appear together. Relatedness and Sentiment sets were developed to evaluate the models in these these two aspects.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "Banjade et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Set",
"sec_num": "5.1.2"
},
{
"text": "For each of these cases (sentiment and relatedness), K-Means clustering was applied to the constituent words to generate two clusters (i.e. K=2). The obtained clusters were evaluated using the purity metric which is further elaborated in Section 5.1.3. Since Word2Vec and GloVe cannot handle out-of-vocabulary (OOV) words, unlike fastText and BERT, the average of all corresponding word vectors were used to represent the OOV words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Set",
"sec_num": "5.1.2"
},
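A minimal sketch of this clustering step, assuming the static embeddings are loaded as Gensim KeyedVectors and using the mean vector as the OOV fallback described above:

```python
# Sketch of the intrinsic clustering step: embed each evaluation word and run
# K-Means with K=2. The mean of all in-vocabulary vectors is used as the OOV
# fallback; kv is assumed to be a Gensim KeyedVectors object.
import numpy as np
from sklearn.cluster import KMeans

def cluster_words(words, kv):
    mean_vector = kv.vectors.mean(axis=0)  # fallback for out-of-vocabulary words
    vectors = np.stack([kv[w] if w in kv else mean_vector for w in words])
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
```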
{
"text": "While Word2Vec, fastText and GloVe models provide a simple word to vector mapping, BERT's learned representations are a bit different and thus, need to be extracted accordingly. For the sake of simplicity, we have averaged the hidden state of the last two hidden layers to get the embeddings for each word token. The words were run without any context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Set",
"sec_num": "5.1.2"
},
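A sketch of this extraction, assuming the pre-trained model has been saved and can be loaded as a BertModel; the checkpoint path is hypothetical.

```python
# Sketch of extracting a vector for a single word from the trained BERT model:
# encode the word without context and average the hidden states of the last
# two layers over its sub-word tokens. The checkpoint path is hypothetical.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("npvec1-bert")
model = BertModel.from_pretrained("npvec1-bert", output_hidden_states=True)
model.eval()

def bert_word_vector(word):
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states       # embeddings + one tensor per layer
    last_two = torch.stack(hidden_states[-2:]).mean(dim=0)  # average the last two layers
    return last_two[0, 1:-1].mean(dim=0)                    # drop [CLS]/[SEP], pool sub-words
```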
{
"text": "The purity metric is an extrinsic cluster evaluation technique (Manning et al., 2008) which requires a gold standard data set. It measures the extent to which a cluster contains homogeneous elements. The purity metric ranges from 0 (bad clustering) to 1 (perfect clustering). Thus, the higher the purity score, the better the results.",
"cite_spans": [
{
"start": 63,
"end": 85,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": "5.1.3"
},
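For reference, purity can be computed as in the sketch below: each cluster is assigned its majority gold label and the fraction of correctly assigned items is returned.

```python
# Sketch of the purity computation: assign each cluster its majority gold label
# and return the fraction of correctly assigned items.
import numpy as np

def purity(gold_labels, cluster_labels):
    gold = np.asarray(gold_labels)          # integer class ids, e.g. 0/1
    clusters = np.asarray(cluster_labels)
    correct = 0
    for c in np.unique(clusters):
        members = gold[clusters == c]
        correct += np.bincount(members).max()  # size of the majority class in the cluster
    return correct / len(gold)

# e.g. purity([0] * 21 + [1] * 21, predicted_clusters) for the Relatedness Set
```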
{
"text": "The results for the intrinsic evaluations are listed in Table 4 . All models performed better in recovering original clusters in the Relatedness Set compared to that of the Sentiment Set i.e. they have higher purity scores in the Relatedness Set than the Sentiment Set. This is expected as semantically opposite words often appear in a very similar context (e.g. This is a new model vs. This is an old model). Relying on neighboring terms alone would provide little context to capture the semantic meaning of a word. Of all three models, however, GloVe performed the best in the sentiment set by an average of 10% (except in the BNTS scheme). This seem to make it more suitable for tasks such as Sentiment Analyses. Interestingly, BERT model did not perform well compared to other models in the Relatedness set. It, however, provided very competitive score in the Sentiment Set.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results for Intrinsic Evaluation",
"sec_num": "5.1.4"
},
{
"text": "Models in the BNT scheme scored highest in both of the intrinsic data sets. Purity for relatedness task for all of the three models in this scheme was 1 whereas GloVe model obtained the global best score of 0.69 in the sentiment set in this scheme. In general, it seems that applying the Normalization scheme has a positive effect on model's capacity to learn the representation which makes sense because Normalization reduces differently spelled versions of the same word to a single representation. Purity dropped significantly for all tasks in all schemes that included Stemming. This may be attributed to the possible over-stemming of the words (under-stemming doesn't seem to be a problem because the model is performing well in the Base scheme).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Intrinsic Evaluation",
"sec_num": "5.1.4"
},
{
"text": "The primary objective of extrinsic evaluation for this study was to compare how the word embeddings helped generalize the training of other supervised models with very few data labels. For this purpose, a feed-forward neural network architecture was used for a classification objective in a multi-class classification setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "5.2"
},
{
"text": "The data set for classification was derived from a publicly available Github repository i.e. Nepali News Dataset 6 . It consists of Nepali news articles in 10 different categories. Each category has 1000 articles. As mentioned, the goal of extrinsic evaluation here is to see how the learned word representations help the generalization of machine learning model for text classification task when limited training data set is available, a practical scenario for low resource language. If we use large training examples, virtually any classifier would learn to perform better even if the word representations are poor. For this reason, we extracted 3000 samples from the dataset with uniform representation from each categories (i.e. 300 examples each) and further split them randomly into chunks of sizes 10%, 10%, and 80% each. This yielded us examples of sizes 313, 326, and 2361 respectively which were subsequently used for training, validation and testing purposes. Training set had at least 21 examples per class whereas the testing set had at least 227 examples per class. The test set was deliberately chosen to be larger to better estimate the generalization of the classification model across different 6 https://github.com/kamalacharya2044/ NepaliNewsDataset embedding schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.2.1"
},
{
"text": "We implemented a very simple text classification model using Keras 7 . For each example (news article), we only used the first five hundred tokens and obtained their embedding vectors from the word embedding model under the study. These vectors were then fed to a Keras model where they were first pooled together by a one-dimensional averaging layer and then passed to a hidden layer with 64 units with the ReLU activation and then to the output layer of 10 units with Sigmoid activation. Binary crossentropy function was used to calculate the loss and the model was trained using the Adam Optimizer (Kingma and Ba, 2014) for 60 epochs each. In case of BERT, we averaged the hidden states from the last two hidden layers to get the embeddings, whereas, for getting the baseline results, instead of using any pre-trained word vectors, a trainable Keras embedding layer was used in front of the architecture mentioned above which automatically learns the word embeddings by only using the provided training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "5.2.2"
},
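A minimal Keras sketch of this classifier, assuming the embedding vectors for the first 500 tokens of each article have been precomputed; array shapes and the training call are illustrative.

```python
# Minimal Keras sketch of the classifier described above. The embedding vectors
# for the first 500 tokens of each article are assumed to be precomputed, so the
# model consumes (500, 300) inputs; shapes and the training call are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

MAX_TOKENS, EMB_DIM, NUM_CLASSES = 500, 300, 10

model = keras.Sequential([
    keras.Input(shape=(MAX_TOKENS, EMB_DIM)),         # precomputed word vectors
    layers.GlobalAveragePooling1D(),                   # one-dimensional average pooling
    layers.Dense(64, activation="relu"),               # hidden layer
    layers.Dense(NUM_CLASSES, activation="sigmoid"),   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X_train: float array of shape (n_examples, 500, 300); y_train: one-hot labels
# of shape (n_examples, 10) -- both assumed to be built beforehand.
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=60)
```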
{
"text": "Macro Precision, Recall and F 1 metrics were used for the evaluation of the classification model. On average, the F 1 scores for word embedding models exceeded the baseline scores by a margin of 5 percent. This suggests that the use of pre-trained word embeddings helps to generalize classification models better than simply using the embeddings learned from the training set. Interestingly, the global maximum F 1 score was obtained in the Base scheme i.e. with no preprocessing applied, and Normalization seemed to make no difference to the score. This can be attributed to the fact that our data set came from highly reputed newspapers i.e. all word spellings were grammatically correct. We foresee significant increase due to Normalization in data sets such as tweets, social media posts and blogs where grammatical errors are more frequent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Extrinsic Evaluation",
"sec_num": "5.2.3"
},
{
"text": "Similarly, Tokenization schemes seemed to drop the classification scores for embedding models but increase the scores for the baseline models in general. This leads us to believe that the representations of the post-positions and agglunitative suffixes, which are the most frequently occurring words in Nepali language, learned by the Word Embedding models may be partial to particular top-ics. We suggest the omission of post-positions and other frequently occurring words from the data set before using these embeddings in a classification setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Extrinsic Evaluation",
"sec_num": "5.2.3"
},
{
"text": "The standard deviation in the F-scores of Word2Vec model, fastText and GloVe model across the different pre-processing schemes are 2.4%, 1% and 1.4% respectively, which suggests that fastText might be more resilient to problems like over-stemming. We thus recommend the usage of fastText models in applications where it is desirable to stem words. Interestingly, BERT model, while produced competitive results, did not exceed our expectations on the classification task. We expect a raise in performance of this model if trained in the architecture proposed in its original implementation i.e. 12 attention heads and 12 hidden layers unlike our slimmed down version of 6 attention heads and 6 hidden layers trained for only one epoch. Training on more data and with more epochs are potential future directions to this end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Extrinsic Evaluation",
"sec_num": "5.2.3"
},
{
"text": "In this paper, we trained 25 Word Embedding models for Nepali language with multiple preprocessing schemes and made them publicly available for accelerating NLP research in low-resource language Nepali 8 . This, to our knowledge, is the first formal and large scale study of Word Embeddings in Nepali. We compared the performances of these models using intrinsic and extrinsic evaluation tasks. Our findings clearly indicate that these word embedding models perform exceptionally well in identifying related words compared to discovering semantically similar words. We also suggest that further comparisons be made with an improved stemmer, which has fewer over-stemming error rates than what we've used, to study the effects of over-stemming in word embeddings. Performance of these Word Embeddings in clustering of related words also suggest us that these models will obtain good results in tasks such as Named Entity Recognition and POS Tagging. This is something that we would like to explore in future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "As far as our study with BERT goes, we obviously recommend training the original BERT architecture, rather than what we have used, with more data. For comparison, the original BERT model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://github.com/nowalab/ nepali-word-embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://oscar-corpus.com 3 https://commoncrawl.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our motivation for training BERT in this scheme was the superior performance of context-independent word embeddings in our intrinsic evaluation task for this particular scheme as per section 5.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://keras.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/nowalab/ nepali-word-embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "was trained on a total of 3.3 billion words whereas we've trained our model in just 360 million words. Unfortunately, for a resource poor language like Nepali, this is not a trivial task. Similarly, it would be most interesting to see performances of other context-dependent embedding models such as ELMo, GPT2 (Radford et al., 2019) , XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) in case of Nepali language.",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 342,
"end": 361,
"text": "(Yang et al., 2019)",
"ref_id": null
},
{
"start": 374,
"end": 392,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "We would like acknowledge Mr. Ganesh Pandey for his valuable contribution in providing the hardware setup for our experiments. Similarly, we thank Ms. Samiksha Bhattarai and Dr. Diwa Koirala for their continued support and encouragement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
},
{
"text": "Meaning ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glossary Original Transliteration",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bengali word embeddings and it's application in solving document classification problem",
"authors": [
{
"first": "Adnan",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Mohammad Ruhul",
"middle": [],
"last": "Amin",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 19th International Conference on Computer and Information Technology (ICCIT)",
"volume": "",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adnan Ahmad and Mohammad Ruhul Amin. 2016. Bengali word embeddings and it's application in solving document classification problem. In 2016 19th International Conference on Computer and In- formation Technology (ICCIT). IEEE, 425-430.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lemon and tea are not similar: Measuring word-to-word similarity by combining different methods",
"authors": [
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "Nabin",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Nobal",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Niraula",
"suffix": ""
},
{
"first": "Dipesh",
"middle": [],
"last": "Rus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gautam",
"suffix": ""
}
],
"year": 2015,
"venue": "International conference on intelligent text processing and computational linguistics",
"volume": "",
"issue": "",
"pages": "335--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajendra Banjade, Nabin Maharjan, Nobal B Niraula, Vasile Rus, and Dipesh Gautam. 2015. Lemon and tea are not similar: Measuring word-to-word similar- ity by combining different methods. In International conference on intelligent text processing and compu- tational linguistics. Springer, 335-346.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics 5 (2017), 135- 146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint learning of character and word embeddings",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint learning of charac- ter and word embeddings. In Twenty-Fourth Interna- tional Joint Conference on Artificial Intelligence.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Very deep convolutional networks for text classification",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01781"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781 (2016).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805 (2018).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Methodical evaluation of Arabic word embeddings",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Elrazzaz",
"suffix": ""
},
{
"first": "Shady",
"middle": [],
"last": "Elbassuoni",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "454--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Elrazzaz, Shady Elbassuoni, Khaled Sha- ban, and Chadi Helwe. 2017. Methodical evaluation of Arabic word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers). 454- 458.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gaurav. 2020. Nepali Wikipedia Corpus",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav. 2020. Nepali Wikipedia Cor- pus.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning Word Vectors for 157 Languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learn- ing Word Vectors for 157 Languages. In Proceed- ings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Portuguese word embeddings: Evaluating on word analogies and natural language tasks",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shulby",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Aluisio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.06025"
]
},
"num": null,
"urls": [],
"raw_text": "Nathan Hartmann, Erick Fonseca, Christopher Shulby, Marcos Treviso, Jessica Rodrigues, and Sandra Aluisio. 2017. Portuguese word embeddings: Eval- uating on word analogies and natural language tasks. arXiv preprint arXiv:1708.06025 (2017).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 (2015).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Nepali Rule Based Stemmer and its performance on different NLP applications",
"authors": [
{
"first": "Pravesh",
"middle": [],
"last": "Koirala",
"suffix": ""
},
{
"first": "Aman",
"middle": [],
"last": "Shakya",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 4th International IT Conference on ICT with Smart Computing and 9th National Students' Conference on Information Technology",
"volume": "",
"issue": "",
"pages": "16--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pravesh Koirala and Aman Shakya. 2018. A Nepali Rule Based Stemmer and its performance on differ- ent NLP applications. In Proceedings of the 4th In- ternational IT Conference on ICT with Smart Com- puting and 9th National Students' Conference on In- formation Technology, (NaSCoIT 2018). 16-20.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word vectors, reuse, and replicability: Towards a community repository of largetext resources",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Murhaf",
"middle": [],
"last": "Fares",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 58th Conference on Simulation and Modelling. Link\u00f6ping University Electronic Press",
"volume": "",
"issue": "",
"pages": "271--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Kutuzov, Murhaf Fares, Stephan Oepen, and Erik Velldal. 2017. Word vectors, reuse, and repli- cability: Towards a community repository of large- text resources. In Proceedings of the 58th Confer- ence on Simulation and Modelling. Link\u00f6ping Uni- versity Electronic Press, 271-276.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360 (2016).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "300-Dimensional Word Embeddings for Nepali Language",
"authors": [
{
"first": "Rabindra",
"middle": [],
"last": "Lamsal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.21227/dz6s-my90"
]
},
"num": null,
"urls": [],
"raw_text": "Rabindra Lamsal. 2019. 300-Dimensional Word Em- beddings for Nepali Language. https://doi.org/ 10.21227/dz6s-my90",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A survey on deep learning for named entity recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jianglei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering (2020).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Analogical Reasoning on Chinese Morphological and Semantic Relations",
"authors": [
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Renfen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Wensi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaoyong",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "138--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical Reasoning on Chi- nese Morphological and Semantic Relations. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguis- tics, 138-143.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692 (2019).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press. ISBN 978- 0-521-86571-5.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Advances in pre-training distributed word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.09405"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Ad- vances in pre-training distributed word representa- tions. arXiv preprint arXiv:1712.09405 (2017).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In Advances in neural information processing systems. 3111-3119.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Saurab Dulal, and Diwa Koirala. 2020. Linguistic Taboos and Euphemisms in Nepali",
"authors": [
{
"first": "B",
"middle": [],
"last": "Nobal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Niraula",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.13798"
]
},
"num": null,
"urls": [],
"raw_text": "Nobal B Niraula, Saurab Dulal, and Diwa Koirala. 2020. Linguistic Taboos and Euphemisms in Nepali. arXiv preprint arXiv:2007.13798 (2020).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combining word representations for measuring word relatedness and similarity",
"authors": [
{
"first": "Dipesh",
"middle": [],
"last": "Nobal Bikram Niraula",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gautam",
"suffix": ""
}
],
"year": 2015,
"venue": "The twenty-eighth international flairs conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobal Bikram Niraula, Dipesh Gautam, Rajendra Ban- jade, Nabin Maharjan, and Vasile Rus. 2015. Com- bining word representations for measuring word re- latedness and similarity. In The twenty-eighth inter- national flairs conference.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures",
"authors": [
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2019,
"venue": "7th Workshop on the Challenges in the Management of Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.14618/IDS-PUB-9021"
]
},
"num": null,
"urls": [],
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous Pipeline for Process- ing Huge Corpora on Medium to Low Resource In- frastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Pi- otr Ba\u0144ski, Adrien Barbaresi, Hanno Biber, Eve- lyn Breiteneder, Simon Clematide, Marc Kupietz, Harald L\u00fcngen, and Caroline Iliadi (Eds.). Leibniz- Institut f\u00fcr Deutsche Sprache, Cardiff, United King- dom. https://doi.org/10.14618/IDS-PUB-9021",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP). 1532-1543.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. ELRA",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. ELRA, Valletta, Malta, 45-50. http://is.muni.cz/publication/ 884893/en.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Twitter sentiment analysis with deep convolutional neural networks",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "959--962",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Twitter sentiment analysis with deep convolutional neural networks. In Proceedings of the 38th Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval. 959-962.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Aravec: A set of arabic word embedding models for use in arabic nlp",
"authors": [
{
"first": "Abu Bakr",
"middle": [],
"last": "Soliman",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Eissa",
"suffix": ""
},
{
"first": "Samhaa",
"middle": [
"R"
],
"last": "El-Beltagy",
"suffix": ""
}
],
"year": 2017,
"venue": "Procedia Computer Science",
"volume": "117",
"issue": "",
"pages": "256--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abu Bakr Soliman, Kareem Eissa, and Samhaa R El- Beltagy. 2017. Aravec: A set of arabic word embed- ding models for use in arabic nlp. Procedia Com- puter Science 117 (2017), 256-265.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art Nat- ural Language Processing. ArXiv abs/1910.03771 (2019).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "",
"middle": [],
"last": "Xlnet",
"suffix": ""
}
],
"year": null,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural infor- mation processing systems. 5753-5763.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Graph convolutional networks for text classification",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7370--7377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 7370-7377.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep learning for sentiment analysis: A survey",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowl- edge Discovery 8, 4 (2018), e1253.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A C-LSTM neural network for text classification",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chonglin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.08630"
]
},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Fran- cis Lau. 2015. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630 (2015).",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table/>",
"html": null,
"text": "for preprocessing (normalizing, tokenizing and stemming) our Corpus Tokens Types Genre Description Our News Corpus 216M 3.3M News Online news Lamsal (Lamsal, 2020) 58.8M 1.2M News Online news OSCAR (Ortiz Su\u00e1rez et al., 2019) 71.8M 2.2M Mixed Mixed Genre Wikipedia (Gaurav, 2020) 5.1M 0.3M Mixed Mixed Genre",
"type_str": "table"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td colspan=\"2\">: Corpus Description</td><td/><td/></tr><tr><td>Preprocessing Scheme</td><td>Code</td><td colspan=\"2\">#Tokens #Types</td></tr><tr><td>Base</td><td>(B)</td><td>279M</td><td>3.14M</td></tr><tr><td>Base+Normalized</td><td>(BN)</td><td>279M</td><td>2.6M</td></tr><tr><td colspan=\"2\">Base+Normalized+Tokenized (BNT)</td><td>360M</td><td>1.4M</td></tr><tr><td colspan=\"2\">Base+Normalized+Stemmed (BNS)</td><td>279M</td><td>2.04M</td></tr><tr><td colspan=\"2\">Base+Normalized+Tokenized+Stemmed (BNTS)</td><td>359M</td><td>1.09M</td></tr><tr><td>Base+Tokenized</td><td>(BT)</td><td>357M</td><td>1.8M</td></tr><tr><td colspan=\"2\">Base+Tokenized+Stemmed (BTS)</td><td>357M</td><td>1.4M</td></tr><tr><td>Base+Stemmed</td><td>(BS)</td><td>279M</td><td>2.5M</td></tr></table>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Data Set for Intrinsic Evaluation of Word Embeddings",
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Intrinsic and Extrinsic Results. Sen and Rel refer to Sentiment and Relatedness respectively. Similarly, B=Base i.e. Raw Text, N=Normalized, T=Tokenized, and S=Stemmed.",
"type_str": "table"
}
}
}
}