{
"paper_id": "2020.nlptea-1.16",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:37.084540Z"
},
"title": "Named-Entity Based Sentiment Analysis of Nepali News Media Texts",
"authors": [
{
"first": "Birat",
"middle": [
"Bade"
],
"last": "Shrestha",
"suffix": "",
"affiliation": {
"laboratory": "Information and Language Processing Research Lab",
"institution": "Kathmandu University",
"location": {
"settlement": "Dhulikhel",
"region": "Kavre",
"country": "Nepal"
}
},
"email": ""
},
{
"first": "Bal",
"middle": [
"Krishna"
],
"last": "Bal",
"suffix": "",
"affiliation": {
"laboratory": "Information and Language Processing Research Lab",
"institution": "Kathmandu University",
"location": {
"settlement": "Dhulikhel",
"region": "Kavre",
"country": "Nepal"
}
},
"email": "[email protected]"
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "Due to their general availability, relative abundance and wide diversity of opinions, news media texts are very good sources for sentiment analysis. However, the major challenge with such texts is the difficulty of aligning the expressed opinions to the concerned political leaders, as this entails the non-trivial tasks of named-entity recognition and anaphora resolution. In this work, our primary focus is on developing a Natural Language Processing (NLP) pipeline involving robust Named-Entity Recognition followed by Anaphora Resolution and then the alignment of the recognized and resolved named-entities (in this case, political leaders) to the correct class of opinions as expressed in the texts. As an outcome of the pipeline, we visualize the popularity of the politicians via time-series graphs of positive and negative sentiments. The performance metrics achieved by the individual components of the pipeline are as follows: Part-of-Speech tagging - 93.06% (F1-score), Named-Entity Recognition - 86% (F1-score), Anaphora Resolution - 87.45% (accuracy) and Sentiment Analysis - 80.2% (F1-score).",
"pdf_parse": {
"paper_id": "2020.nlptea-1.16",
"_pdf_hash": "",
"abstract": [
{
"text": "Due to their general availability, relative abundance and wide diversity of opinions, news media texts are very good sources for sentiment analysis. However, the major challenge with such texts is the difficulty of aligning the expressed opinions to the concerned political leaders, as this entails the non-trivial tasks of named-entity recognition and anaphora resolution. In this work, our primary focus is on developing a Natural Language Processing (NLP) pipeline involving robust Named-Entity Recognition followed by Anaphora Resolution and then the alignment of the recognized and resolved named-entities (in this case, political leaders) to the correct class of opinions as expressed in the texts. As an outcome of the pipeline, we visualize the popularity of the politicians via time-series graphs of positive and negative sentiments. The performance metrics achieved by the individual components of the pipeline are as follows: Part-of-Speech tagging - 93.06% (F1-score), Named-Entity Recognition - 86% (F1-score), Anaphora Resolution - 87.45% (accuracy) and Sentiment Analysis - 80.2% (F1-score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent times, the way the general public acquires news has drastically changed. Traditional news sources such as television, radio and printed newspapers are in steady decline in terms of use and consumption. Nowadays, most people, especially those from the younger generations, read the news on the web, either on social media or on online news portals. As the internet has become more accessible and available to the general public, we have seen rapid growth in the number of online news portals and blog sites. Almost all of the established media houses have gone online, maintaining news portals alongside their hard-copy versions. This makes news media texts a very good resource for sentiment analysis in the socio-political domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In most countries, the popularity of politicians (especially the head of state) is tracked by media houses, independent organizations, as well as the parties the politicians are affiliated to. Approval ratings show how popular or unpopular a politician is in the view of the general public. Politicians make changes to their policies as well as their public persona so that their approval ratings can improve. A positive approval rating can even suggest the likelihood of a politician winning an election. Approval ratings are generally calculated by conducting opinion polls on a particular sample population. Another way of calculating an approval rating is to find out what kinds of views are being expressed about a politician in printed news media articles. In the context of Nepal, such approval ratings are not available. This work presents a way of calculating popularity ratings, and thus helps determine how popular or unpopular a Nepali politician is, by analyzing news media texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of analyzing sentiments in news media texts has its own set of challenges. A news article may contain sentiments or opinions expressed about more than one politician. Worse, even a single sentence might express sentiments about multiple politicians. From this perspective, the task demands that the named-entity, i.e., the political leader being referred to, is accurately identified and resolved in terms of the pronominal references used in the text before moving to the task of aligning the corresponding sentiment to the named-entity. Once the two-phase task of named-entity resolution and sentiment alignment is complete, we present the results in the form of a time-series popularity or trending graph. Such a representation would be of interest to a wide range of target audiences: the political leaders themselves, their political affiliations, the general public and media houses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automating this task is not trivial, as the whole process is technically involved and requires a series of NLP sub-tasks to be accomplished before we finally reach the end of the pipeline. We describe the pipeline in Section 3 of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In terms of accomplishing the work, our major contributions can be listed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been a few research works on Sentiment Analysis of Nepali texts. In one of the first works on Sentiment Analysis for the Nepali language, Gupta and Bal developed a lexical resource, namely the Bhavanakos (Gupta & Bal, 2015). Yadav & Pant (2014) used a machine learning approach to determine whether movie reviews were positive or negative. Their architecture consisted of 3 major components, viz., Pre-Processing, Feature Extraction and Classification. In the Pre-Processing phase, they performed steps such as whitespace and special character removal, abbreviation expansion, stemming, stop word removal, negation handling, PoS tagging, named-entity recognition, etc. To train the model, they extracted features such as TF-IDF, positive word count, negative word count, presence of polar words, etc. Using these features, they were able to train a Na\u00efve Bayes classifier and reported a precision of 79.23%, a recall of 78.57% and an F-score of 78.90%. Their dataset consisted of 500 samples: 250 positive and 250 negative (Yadav & Pant, 2014).",
"cite_spans": [
{
"start": 214,
"end": 233,
"text": "(Gupta & Bal, 2015)",
"ref_id": "BIBREF2"
},
{
"start": 236,
"end": 255,
"text": "Yadav & Pant (2014)",
"ref_id": "BIBREF0"
},
{
"start": 1028,
"end": 1048,
"text": "(Yadav & Pant, 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In an undertaking similar to sentiment analysis, Shahi & Pant (2018) used different text mining techniques to address the text classification problem for Nepali news media text. The researchers compared the accuracy of three machine learning algorithms, Na\u00efve Bayes, Neural Network and Support Vector Machine, in classifying text according to its content. Their architecture consisted of 3 major components: Pre-Processing, Feature Extraction and Machine Learning. In the Pre-Processing phase, they performed steps such as tokenization, special symbol and number removal, stop word removal and word stemming. To extract features from the dataset, they used TF-IDF. The researchers used two instances of SVM: SVM with a linear kernel and SVM with a radial basis function (RBF) kernel. SVM is a binary classifier, but text classification is a multi-class problem; to mitigate this issue, the researchers adopted a one-vs.-rest approach. The Neural Network they used was a simple dense backpropagation multilayer perceptron with stochastic gradient descent optimization. The researchers used a five-fold validation method. Their results showed that SVM with the RBF kernel outperformed the other three algorithms with an average accuracy of 74.65%. Linear SVM had an average accuracy of 74.62%, the Multilayer Perceptron Neural Network had 72.99% and lastly Na\u00efve Bayes had an accuracy of 68.31% (Shahi & Pant, 2018).",
"cite_spans": [
{
"start": 1402,
"end": 1422,
"text": "(Shahi & Pant, 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Researchers have used sentiment analysis techniques to extract opinions from news media texts as well. Thapa & Bal (2016) compared the accuracy of machine learning algorithms in classifying sentiments expressed in Nepali news media text. They tested three algorithms, Support Vector Machine, Multinomial Naive Bayes and Logistic Regression, using a 5-fold cross-validation method. Their dataset consisted of 384 book and movie reviews, of which 179 were positive and 205 were negative sentences. To extract features from the dataset, they used four methods: Bag-of-Words, Bag-of-Words (with stopwords removed), TF-IDF and TF-IDF (with stopwords removed). The results obtained from their experiments showed that the F1-score of the Multinomial Naive Bayes algorithm was higher when using TF-IDF with stopwords retained (Thapa & Bal, 2016).",
"cite_spans": [
{
"start": 813,
"end": 832,
"text": "(Thapa & Bal, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In addition, Kafle (2019) implemented a sentiment-based popularity tracker for Nepali politicians. In his research, he used Nepali news media text published in English as his data source. He tracked the popularity of Nepali politicians based on two parameters: growing popularity and diminishing popularity. To track the popularity of a particular politician, he carried out sentence-level sentiment analysis and assigned sentiment scores to each article with respect to that politician. His architecture can be divided into three main phases. The first phase was Named-Entity Extraction, where all the named-entities in the articles were extracted. In the second phase, pronominal anaphora were resolved by replacing the pronouns with the named-entities they were referring to. In the third and final phase, sentiments were extracted from the articles tokenized into sentences. To extract the sentiments from the tokens, he used a lexicon- and rule-based sentiment analysis tool called Valence Aware Dictionary and sEntiment Reasoner (VADER).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Nepali is a morphologically rich and complex language. In order to mine opinions from Nepali texts, the text classifier being used should be able to incorporate specific language features before classifying the text (Shahi & Pant, 2018). Different techniques have been used to extract sentiment from Nepali text: researchers have employed learning-based as well as rule- and lexicon-based approaches for sentiment classification. To extract features from Nepali text, techniques like TF-IDF, Bag-of-Words, etc. are used. The problem with these techniques, however, is that they do not consider the context of the words. It has also been observed that removing stop words has no significant effect on the evaluation metrics and the performance of the classifier (Thapa & Bal, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose the following framework to address the given research problem, which consists of a pipeline of six components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "1. Data Collection 2. Pre-Processing 3. Parts-of-Speech Tagging 4. Named-Entity Recognition 5. Anaphora Resolution 6. Sentiment Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "A multi-threaded scraping framework was developed to facilitate the data collection process. The data was gathered by scraping news articles from four online news portals; the framework scraped the articles and saved them in a data repository. The news portals were chosen based on their influence as mainstream news sources in the Nepali news media. The online portals from which the articles were scraped are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "i. Kantipur Daily 1 ii. NagarikNews 2 iii. Online Khabar 3 iv. Setopati 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
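The multi-threaded collection step described in this section can be sketched as follows. This is a minimal, stdlib-only illustration, not the authors' framework; `fetch_article` is a hypothetical stub standing in for the real HTTP download and HTML/JavaScript stripping.

```python
import threading
import queue

def fetch_article(url):
    # Hypothetical stub: the real framework downloads the page and
    # strips HTML tags and JavaScript before saving the article.
    return {"url": url, "text": "<article body>"}

def scrape_all(urls, num_workers=4):
    """Scrape every URL with a pool of worker threads and collect
    the results in a shared repository (here, a plain list)."""
    tasks = queue.Queue()
    for u in urls:
        tasks.put(u)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                url = tasks.get_nowait()
            except queue.Empty:
                return  # no work left
            article = fetch_article(url)
            with lock:  # guard the shared repository
                results.append(article)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the actual pipeline, `fetch_article` would issue requests against the four portals listed above.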
{
"text": "The Pre-Processing component consists of two sub-components: Article Cleaning and Article Lemmatizing. First of all, badly encoded characters are removed from the articles. The scraping framework itself is equipped with the functionality to strip unnecessary HTML tags and JavaScript code. Unnecessary punctuation symbols are also removed from the articles, and the articles are prepared for the next phase by tokenizing them. Secondly, the Article Lemmatizing sub-component takes care of splitting off certain suffixes at the end of each tokenized word, such as \u0932\u093e\u0908, \u092e\u093e, \u0939\u0930\u0942, \u0915\u094b, etc. It is worth noting that most inflections in the Nepali language are caused by the postpositions that are placed after nouns, verbs, etc. The list of postpositions was obtained from sanjaalcorps 5 . The Article Lemmatizing sub-component splits the root word and the suffixes but does not remove the suffix altogether, as it is required in the subsequent phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing",
"sec_num": "3.2"
},
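The suffix-splitting behaviour of the Article Lemmatizing sub-component can be sketched as below. The postposition list here is a tiny illustrative subset, not the full sanjaalcorps list used in the paper; note that the suffix is kept, not discarded.

```python
# Illustrative subset of Nepali postpositions (लाई, मा, हरू, को, ले);
# the paper uses the full list obtained from sanjaalcorps.
SUFFIXES = ["लाई", "मा", "हरू", "को", "ले"]

def split_postposition(token):
    """Split a token into (root, suffix). The suffix is kept rather
    than removed, since the subsequent phases still need it. Longer
    suffixes are tried first so they win over shorter matches."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if token.endswith(suf) and len(token) > len(suf):
            return token[:-len(suf)], suf
    return token, ""
```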
{
"text": "The Part-of-Speech (PoS) tagging component assigns a lexical category to each word in a text. A part of speech is a category of words that have similar grammatical properties. The most common PoS categories for the English language are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, numeral and article or determiner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "3.3"
},
{
"text": "We trained the statistics-based Trigrams'n'Tags (TnT) tagger with the PoS-tagged corpus of Nepali available at the official website of the Center for Language Engineering 6 . The TnT tagger is based on the work of Thorsten Brants (Brants, 2002). If the tagger encounters a word that it has not seen during training, it tags that word as 'Unk' or Unknown. In order to address the 'Unk' part-of-speech category, we added a few articles from our corpus to the training dataset.",
"cite_spans": [
{
"start": 235,
"end": 249,
"text": "(Brants, 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "3.3"
},
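TnT itself is a second-order (trigram) Markov tagger and too involved to reproduce here; the toy most-frequent-tag baseline below only illustrates the 'Unk' fallback behaviour described above, i.e. how a word never seen in training receives the 'Unk' tag.

```python
from collections import Counter, defaultdict

class FrequencyTagger:
    """Toy stand-in for the trained tagger: each known word gets its
    most frequent training tag; unseen words get 'Unk' (the behaviour
    that motivated extending the training data with our own articles)."""

    def __init__(self):
        self.freq = defaultdict(Counter)

    def train(self, tagged_sents):
        # tagged_sents: list of sentences, each a list of (word, tag).
        for sent in tagged_sents:
            for word, tag in sent:
                self.freq[word][tag] += 1

    def tag(self, words):
        return [
            (w, self.freq[w].most_common(1)[0][0]) if w in self.freq
            else (w, "Unk")
            for w in words
        ]
```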
{
"text": "A 10-fold validation method was used to validate the model. The results showed that the trained model had an average accuracy of 90.05%, precision of 96.55%, recall of 91.3% and F1-score of 93.06%. It was also found that lemmatizing the text increased the accuracy of the model by 10%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "3.3"
},
{
"text": "A named-entity refers to a real-world object such as the name of a person, organization, location, etc. In Named-Entity Recognition (NER), such named entities are identified and tagged in text: an NER classifier reads the text, identifies the named entities and classifies them accordingly. For the NER component, we trained the StanfordNERTagger. This NER classifier is based on the work of Finkel et al. (Finkel et al., 2005). It uses an advanced statistical learning algorithm and is therefore relatively computationally expensive. For this work, we used the dataset made available by Singh et al. (Singh et al., 2019). The dataset follows the standard CoNLL-2003 IO format (Sang and Meulder, 2003): it consists of two tab-separated fields, the word and its named-entity tag, with one word per line. In order to enhance the performance of the tagger, we extended the dataset with data from our own corpus. We again used a 10-fold validation method to validate the model. The model had an average F1-score of 86%. We present the Precision, Recall and F1-scores for the named-entity tags in Table 1.",
"cite_spans": [
{
"start": 419,
"end": 440,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF4"
},
{
"start": 602,
"end": 622,
"text": "(Singh et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 658,
"end": 688,
"text": "CoNLL-2003 IO format (Sang and",
"ref_id": null
},
{
"start": 689,
"end": 703,
"text": "Meulder, 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1095,
"end": 1103,
"text": "Table no",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Named-Entity Recognition",
"sec_num": "3.4"
},
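The dataset format described above (one word per line, two tab-separated fields) can be read with a few lines of code. The blank-line sentence separator assumed here is the usual CoNLL convention, not something the paper specifies.

```python
def read_conll(lines):
    """Parse 'word<TAB>tag' lines into sentences, i.e. lists of
    (word, tag) pairs. Blank lines separate sentences (the usual
    CoNLL convention, assumed here)."""
    sentences, current = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():
            if current:
                sentences.append(current)
                current = []
            continue
        word, tag = line.split("\t")
        current.append((word, tag))
    if current:  # flush the last sentence
        sentences.append(current)
    return sentences
```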
{
"text": "Anaphora resolution refers to the task of correctly resolving pronominal references to the referred named-entity in texts. The referring word is called the anaphor and the referenced word is called the antecedent. We implemented a rule-based method based on the Lappin and Leass algorithm to resolve anaphora in Nepali news media text. The algorithm uses a simple weighting scheme that balances the effects of recency and sentence structure. It works by adding a discourse variable for each new entity mentioned in the discourse, and it calculates the degree of salience for each entity by summing the weights from a table of salience factors (Lappin and Leass, 1994).",
"cite_spans": [
{
"start": 671,
"end": 695,
"text": "(Lappin and Leass, 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphora Resolution",
"sec_num": "3.5"
},
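A simplified sketch of the salience computation: each candidate antecedent sums the weights of the salience factors that apply to its mentions, halved once per intervening sentence (the recency decay of Lappin and Leass), and the pronoun resolves to the most salient candidate. The factor weights follow the published table; how factors are assigned to each mention is elided here.

```python
# Salience-factor weights from Lappin and Leass (1994).
WEIGHTS = {
    "sentence_recency": 100,
    "subject": 80,
    "existential": 70,
    "direct_object": 50,
    "indirect_object": 40,
    "head_noun": 80,
    "non_adverbial": 50,
}

def resolve(pronoun_sent_idx, mentions):
    """mentions: (entity, sentence_index, applicable_factors) triples.
    Weights are halved for every sentence between the mention and the
    pronoun; the highest-salience entity is chosen as antecedent."""
    salience = {}
    for entity, sent_idx, factors in mentions:
        score = sum(WEIGHTS[f] for f in factors)
        score /= 2 ** (pronoun_sent_idx - sent_idx)  # recency decay
        salience[entity] = salience.get(entity, 0.0) + score
    return max(salience, key=salience.get)
```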
{
"text": "This component uses the pronouns and other parts of speech tagged by the PoS tagging component and the named-entities tagged by the Named-Entity Recognition component. In this research work, we resolve only the personal pronouns \u0909\u0928\u0940, \u0909\u0928, \u0909\u0939\u093e\u093e\u0901, \u0909\u0939\u093e, \u0909, \u090a and \u0935\u0939\u093e\u093e\u0901. For testing, the first five sentences of 292 news articles were used, making a total of 1460 sentences. Of these 1460 sentences, 600 had no named entities, 589 had named entities and 271 had pronouns. Of the 271 pronouns, 237 were resolved correctly and 34 were resolved incorrectly. The accuracy of the algorithm was found to be 87.45%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphora Resolution",
"sec_num": "3.5"
},
{
"text": "In this section, we discuss how the sentiment analysis model was developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.6"
},
{
"text": "Unfortunately, a publicly available sentence-level sentiment-annotated dataset for the Nepali language does not exist. Most previous research deals with document-level sentiment analysis, so it is of very little use for this work. The only remaining option was to label the dataset manually. A total of 3490 sentences were labeled manually and classified into one of two classes: Positive and Negative. Of these 3490 sentences, 2676 were positive and 814 were negative. To balance the dataset, we included an equal number of positive (814) and negative (814) sentences in the training dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.6.1"
},
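The balancing step (equal numbers of positive and negative sentences) amounts to undersampling the majority class. A minimal sketch, with a hypothetical `balance` helper and a fixed seed for reproducibility:

```python
import random

def balance(positive, negative, seed=0):
    """Undersample the larger class so both classes contribute the
    same number of sentences (814 each in the paper's dataset)."""
    rng = random.Random(seed)  # fixed seed: reproducible sampling
    n = min(len(positive), len(negative))
    return rng.sample(positive, n) + rng.sample(negative, n)
```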
{
"text": "In Natural Language Processing, representing words in a multi-dimensional vector space can improve the performance of learning-based algorithms (Mikolov et al., 2013). In this work, we have used Word2Vec (Mikolov et al., 2013) and FastText (Bojanowski et al., 2016) to embed the words, primarily for the purpose of feature extraction for the next component in the pipeline, i.e., Sentiment Classification. Pre-trained embedding models for Nepali language texts do exist (Lamsal, 2019). However, to ensure better coverage, we opted for embedding models trained on the text corpus developed for this research work. The trained model represents each of the 19,339 unique words in the corpus in a 300-dimensional vector space. Four models, Word2Vec with CBOW, Word2Vec with skipgram, FastText with CBOW and FastText with skipgram, were trained for testing purposes. To extract the necessary feature vector, the words in a sentence were first embedded, and then the embedding vectors of all words in the sentence were averaged. Finally, this feature vector was used for sentiment classification. From our initial experiments, we found that the classifiers performed better when the words were embedded using the skipgram parameter. The results obtained are presented in",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 204,
"end": 226,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 240,
"end": 265,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 475,
"end": 489,
"text": "(Lamsal, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding",
"sec_num": "3.6.2"
},
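The feature-extraction step, embed each word and average the vectors over the sentence, can be sketched as below. The 3-dimensional toy lookup table is a hypothetical stand-in for the trained 300-dimensional Word2Vec (skipgram) model.

```python
# Toy 3-d embedding table; the real model maps each of the 19,339
# corpus words to a 300-dimensional vector.
TOY_EMBEDDINGS = {
    "राम": [0.2, 0.4, 0.0],
    "राम्रो": [0.6, 0.0, 0.2],
}

def sentence_vector(tokens, embeddings, dim=3):
    """Average the embedding vectors of the in-vocabulary tokens;
    sentences with no known words map to the zero vector."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

The resulting vector is what the sentiment classifier in the next sub-section consumes as its feature representation.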
{
"text": "For classifying the sentiments in this work, the initial plan was to use Recurrent Neural Network ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Classification",
"sec_num": "3.6.3"
},
{
"text": "To test the performance of the sentiment classifiers, a 10-fold cross-validation method was used. From the experiments conducted, we found that the Support Vector Machine (SVM) with Word2Vec embeddings (skipgram) had the overall highest performance metrics, with an accuracy of 80.15%, precision of 80.4%, recall of 80.2% and F1-score of 80.2%. All the averaged results obtained from the experiments are presented in Table 3. For visualization, the popularity trends of the three most prominent Nepali politicians from 2018/04 to 2018/09 were plotted. The popularity graphs are shown in Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 426,
"text": "Table no",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
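The 10-fold cross-validation used throughout can be reduced to an index-splitting routine like the one below; the classifier itself (SVM over averaged Word2Vec features) is omitted, so this only shows how each sample serves as test data exactly once.

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train, test) index lists for k-fold cross-validation.
    Every sample appears in exactly one test fold; fold sizes differ
    by at most one when k does not divide n_samples."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = fold_size + (1 if fold < remainder else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```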
{
"text": "We implemented a named-entity based sentiment analysis framework in this research work in order to distill the outlook expressed towards the politicians in the news media. Initially, the names of the politicians in the news articles were identified and the pronominal expressions that were referring to the names were resolved. Sentences with the name of the politicians were then extracted and classified according to the sentiment expressed towards them. Support Vector Machine had the overall highest performance metrics for classifying the sentiments expressed in the sentences. We experimented with different embedding techniques and found that Word2Vec with skipgram was the optimal option for feature extraction. This combination of classifier and embedding technique was used to classify the sentences in the articles. Finally, as an application of our research work, we presented the results in the form of a time-series popularity graph or a trending graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For each component of the pipeline, we were able to achieve relatively high performance metrics. Nevertheless, there are some limitations to our work. The components of the proposed framework are based on probabilistic and rule-based models, which underperform compared to neural network models; we could not go for the latter because our dataset is not large enough. Similarly, the anaphora resolution component only resolved pronouns; other expressions referring to an entity were ignored altogether. Furthermore, we have not dealt explicitly with the opinion holder or the target of the opinions in this work, although named-entities can also be related to the target in many aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The performance of the overall framework can be further enhanced by using more advanced variants of neural networks that are specialized for Natural Language Processing tasks. Different variants of RNNs have been used for Parts-of-Speech tagging as well as Named-Entity Extraction with considerable accuracy and success for other languages. These neural networks can be used for sentiment classification as well, provided we have sufficient data. So, one aspect of future enhancement would definitely be increasing the dataset. There are other areas of improvement as well: for anaphora resolution, graph-based neural networks seem promising, as they are more effective in handling non-linear data. Furthermore, the latest embedding techniques such as BERT, ELMo, etc. can be used to embed the context of words within a sentence more accurately and thus obtain better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "1 https://ekantipur.com/ 2 https://nagariknews.nagariknetwork.com/ 3 https://www.onlinekhabar.com/ 4 https://www.setopati.com/ 5 https://github.com/sanjaalcorps/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cle.org.pk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiment Analysis on Nepali Movie Reviews using Machine Learning",
"authors": [
{
"first": "Abhimanu",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"K"
],
"last": "Pant",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal for Research and Development",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhimanu Yadav and Ashok K. Pant. 2014. Sentiment Analysis on Nepali Movie Reviews using Machine Learning. Journal for Research and Development.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Detecting Sentiment in Nepali texts: A bootstrap approach for Sentiment Analysis of texts in the Nepali language",
"authors": [
{
"first": "Chandan",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Bal",
"middle": [
"Krishna"
],
"last": "Bal",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"1-4.10.1109/CCIP.2015.7100739"
]
},
"num": null,
"urls": [],
"raw_text": "Chandan Gupta and Bal Krishna Bal. 2015. Detecting Sentiment in Nepali texts: A bootstrap approach for Sentiment Analysis of texts in the Nepali language. 1-4. 10.1109/CCIP.2015.7100739.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 142-147, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43nd Annual Meeting of the Association for Computational Linguistics (ACL 2005)",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pp. 363-370.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Popularity Tracking and Trend Analyses of Political Figures Based on Online News Data (Master's thesis)",
"authors": [
{
"first": "Kamal",
"middle": [],
"last": "Kafle",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamal Kafle. 2019. Popularity Tracking and Trend Analyses of Political Figures Based on Online News Data (Master's thesis). Kathmandu University",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classifying sentiments in Nepali subjective texts. 1-6",
"authors": [
{
"first": "Lal",
"middle": [
"Bahadur Reshmi"
],
"last": "Thapa",
"suffix": ""
},
{
"first": "Bal",
"middle": [
"Krishna"
],
"last": "Bal",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/IISA.2016.7785374"
]
},
"num": null,
"urls": [],
"raw_text": "Lal Bahadur Reshmi Thapa and Bal Krishna Bal. 2016. Classifying sentiments in Nepali subjective texts. 1-6. 10.1109/IISA.2016.7785374.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named Entity Recognition for Nepali Language",
"authors": [
{
"first": "Oyesh",
"middle": [
"Mann"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Padia",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oyesh Mann Singh, Ankur Padia, and Anupam Joshi. 2019. Named Entity Recognition for Nepali Language.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprintarXiv:1607.04606.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "300-Dimensional Word Embeddings for Nepali Language",
"authors": [
{
"first": "Rabindra",
"middle": [],
"last": "Lamsal",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Dataport",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.21227/dz6s-my90"
]
},
"num": null,
"urls": [],
"raw_text": "Rabindra Lamsal. 2019. 300-Dimensional Word Embeddings for Nepali Language. IEEE Dataport.http://dx.doi.org/10.21227/dz6s-my90",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An algorithm for pronomial anaphora resolution",
"authors": [
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"J"
],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "535--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shalom Lappin, Herbert J. Leass. 1994. An algorithm for pronomial anaphora resolution. Computational Linguistics 20, 535-561",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nepali news classification using Na\u00efve Bayes, Support Vector Machines and Neural Networks. 1-5",
"authors": [
{
"first": "Bahadur",
"middle": [],
"last": "Tej",
"suffix": ""
},
{
"first": "Ashok",
"middle": [],
"last": "Shahi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar Pant",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICCICT.2018.8325883"
]
},
"num": null,
"urls": [],
"raw_text": "Tej Bahadur Shahi and Ashok Kumar Pant. 2018. Nepali news classification using Na\u00efve Bayes, Support Vector Machines and Neural Networks. 1- 5. 10.1109/ICCICT.2018.8325883.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "TnT: A Statistical Part-of-Speech Tagger",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"ANLP.10.3115/974147.974178"
]
},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants. 2002. TnT: A Statistical Part-of- Speech Tagger. ANLP. 10.3115/974147.974178.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": ": a) Developed a Part-of-Speech (PoS) tagger for Nepali b) Developed a Named-Entity Recognition (NER) classifier for Nepali c) Developed a rule-based Anaphora Resolution module for Nepali d) Manually labelled a sentence level Sentiment Corpus of 3490 sentences from Nepali News Media texts e) Developed a Machine Learning based Sentiment Classifier based on the Sentiment Corpus"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 1: System Architecture"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Time Series Popularity Analysis of Nepali Politicians"
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Model</td><td>Parameter</td><td>F1</td></tr><tr><td>Word2Vec</td><td colspan=\"2\">Continuous Bag of Words 70</td></tr><tr><td/><td>Skip Gram</td><td>80.2</td></tr><tr><td>FastText</td><td colspan=\"2\">Continuous Bag of Words 66.4</td></tr><tr><td/><td>Skip Gram</td><td>78.7</td></tr></table>",
"num": null,
"text": ".",
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>: Performance Evaluation of Embedding</td></tr><tr><td>Methods</td></tr></table>",
"num": null,
"text": "",
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Performance evaluation of SVM, Random Forrest and Decision Tree",
"html": null
}
}
}
}