|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:12:33.271307Z" |
|
}, |
|
"title": "LPS@LT-EDI-ACL2022:An Ensemble Approach about Hope Speech Detection", |
|
"authors": [ |
|
{ |
|
"first": "Yueying", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Liupanshui Normal University", |
|
"location": { |
|
"settlement": "Guizhou", |
|
"country": "P.R. China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The task shared by sponsor about Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-ACL-2022.The goal of this task is to identify whether a given comment contains hope speech or not,and hope is considered significant for the well-being, recuperation and restoration of human life.Our work aims to change the prevalent way of thinking by moving away from a preoccupation with discrimination, loneliness or the worst things in life to building the confidence, support and good qualities based on comments by individuals. In response to the need to detect equality, diversity and inclusion of hope speech in a multilingual environment, we built an integration model and achieved well performance on multiple datasets presented by the sponsor and the specific results can be referred to the experimental results section.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The task shared by sponsor about Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-ACL-2022.The goal of this task is to identify whether a given comment contains hope speech or not,and hope is considered significant for the well-being, recuperation and restoration of human life.Our work aims to change the prevalent way of thinking by moving away from a preoccupation with discrimination, loneliness or the worst things in life to building the confidence, support and good qualities based on comments by individuals. In response to the need to detect equality, diversity and inclusion of hope speech in a multilingual environment, we built an integration model and achieved well performance on multiple datasets presented by the sponsor and the specific results can be referred to the experimental results section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In the age of multimedia information technology, massive network data is a symbol of people's freedom of speech,and these messages contain a lot of positive or negative sentiments.Past research has mostly focused on sentiment analysis, or negative detection of insults,aggression and hate speech 1 (Chakravarthi et al., 2020 (Chakravarthi et al., , 2022b Sampath et al., 2022; Ravikiran et al., 2022; Bharathi et al., 2022; Priyadharshini et al., 2022) . Instead,the goal of this task (Chakravarthi et al., 2022a) shared at LT-EDI 2022-ACL 2022 2 is to determine whether a given comment contains hope speech or not in Tamil, Malayalam, Kannada, English and Spanish. Tamil, Malayalam, and Kannada belongs to Dravidian languages (Subalalitha, 2019; Srinivasan and Subalalitha, 2019; Narasimhan et al., 2018) . Tamil is an official language of the Indian state of Tamil Nadu, the sovereign nations of Sri Lanka and Singapore, and the Union Territory of Puducherry (Sakuntharaj and Mahesan, 2021 , 2017 , 2016 Thavareesan and Mahesan, 2019 , 2020a ,b, 2021 . The Dravidian languages are first attested in the 6th century BCE as Tamili (also called Tamil-Brahmi) script inscribed on the cave walls in the Madurai and Tirunelveli districts of Tamil Nadu (Anita and Subalalitha, 2019b,a; Subalalitha and Poovammal, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 324, |
|
"text": "(Chakravarthi et al., 2020", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 354, |
|
"text": "(Chakravarthi et al., , 2022b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 376, |
|
"text": "Sampath et al., 2022;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 400, |
|
"text": "Ravikiran et al., 2022;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 423, |
|
"text": "Bharathi et al., 2022;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 452, |
|
"text": "Priyadharshini et al., 2022)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 513, |
|
"text": "(Chakravarthi et al., 2022a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 746, |
|
"text": "(Subalalitha, 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 780, |
|
"text": "Srinivasan and Subalalitha, 2019;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 805, |
|
"text": "Narasimhan et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 991, |
|
"text": "(Sakuntharaj and Mahesan, 2021", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 992, |
|
"end": 998, |
|
"text": ", 2017", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 999, |
|
"end": 1005, |
|
"text": ", 2016", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1006, |
|
"end": 1035, |
|
"text": "Thavareesan and Mahesan, 2019", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1036, |
|
"end": 1043, |
|
"text": ", 2020a", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1044, |
|
"end": 1052, |
|
"text": ",b, 2021", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1281, |
|
"end": 1313, |
|
"text": "Subalalitha and Poovammal, 2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Research should take a positive reinforcement approach.The aim is to change the prevailing mindset by moving away from focusing on discrimination,loneliness or the worst things in life to building confidence,support and good character based on personal comments (Chakravarthi et al., 2022a) .Therefore,we built an ensemble model to detect user-generated comment sentences from the social media platform (YouTube) that contained hope speech or not, and our model achieves good results on relevant data sets 3 .This is a study of a speech of hope that interprets equality, diversity and inclusion in a multilingual environment. We have open-sourced our code implementations on GitHub 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 290, |
|
"text": "(Chakravarthi et al., 2022a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The study found that in the past, people mainly focused on the sentiment analysis of monolingual (English) (A. Al Shamsi et al., 2021) ,or the negative detection of insult, attack and hate speech in mixed or multilingual languages.While there were few studies on the hope speech detection of equality, diversity and inclusion in multilingual environments(as (Ghanghor et al., 2021) ). In particular, studies that use positive reinforcement methods to build people's confidence, support and", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 134, |
|
"text": "(A. Al Shamsi et al., 2021)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 381, |
|
"text": "(Ghanghor et al., 2021)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "English Spanish Kannada Malayalam Tamil Training Non_hope_speech 20778 499 3241 6205 7872 Hope_speech 1962 491 1699 1668 6327 Development Non_hope_speech 2569 161 408 784 998 Hope_speech 272 169 213 190 757 Test 2843 330 618 1071 1761 Total 28424 1650 6176 9918 17715 (Ghanghor et al., 2021) submitted the result about hope speech detection in Dravidian languages shared task organized by LT-EDI 2021. In the same task, Mahajan et al. (Mahajan et al., 2021) also made contributions.Their approach fine-tunes RoBERTa for Hope Speech detection in English and fine-tune XLM-RoBERTa for Hope Speech detection in Tamil and Malayalam, two low resource Indic languages.Although some people have done pioneering work, the research in this area still needs more energy from researchers, which is why we are working hard to do research and write this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 323, |
|
"text": "(Ghanghor et al., 2021)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 489, |
|
"text": "(Mahajan et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 293, |
|
"text": "Tamil Training Non_hope_speech 20778 499 3241 6205 7872 Hope_speech 1962 491 1699 1668 6327 Development Non_hope_speech 2569 161 408 784 998 Hope_speech 272 169 213 190 757 Test 2843 330 618 1071 1761 Total 28424 1650 6176 9918", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Class", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The dataset(Chakravarthi, 2020b) is provided by ACL 2022 contains 59,354 comments from the famous online video sharing platform YouTube out of which 28,424 are in English, 1,650 in Spanish, 6,176 in Kannada (Hande et al., 2021) , 9,918 in Malayalam, and 17,715 comments are in Tamil (Table 1) . This is a comment or post level classification task. Given a YouTube comment, we should classify it into 'Hope speech' and 'Not hope speech'. A comment / post may contain more than one sentence but the average sentence length is 1. The annotations are made at a comment / post level 5 , and the test set is not annotated of label. It is observed that the sentence of data is in a 5 https://drive.google.com/file/d/ 1uOxyblVUCOFaofuw56KJKlx-t_nL4mLf/view code-mixed format (a mixture of Native type and Roman type), and contains a lot of @ names, repeated words or letters, useless symbols, expressions, etc.Before feeding the raw tweets to any training stage,we will do a simple data preprocessing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 227, |
|
"text": "(Hande et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 292, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1.No translation processing is done for texts code-mixed with native and Roman type and Keep the sentence length at 50.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2.Remove unwanted information,like: Usernames (annotated as @names),URLs,and useless symbols present in the tweets are removed altogether,while hashtags (annotated as hashtag) are left as it is.But emoticons remain, and they contain in some sense our sentiment expression.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3.Stopwords processing After the above simple preprocessing, it is directly input to the model for training.In addition, it can be found that the data set is unbalanced, which we will address in future work, and our model does not use any external data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This section introduces the structure of our model and experimental results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Framework and Experimental Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "All the data we submitted came from the same model framework and the architecture of the proposed system is shown in Figure 1 , which is an ensemble model consisting finallyof three parts.There are LSTM (Greff et al.) , CNN+LSTM (Yenter and Verma) and BiLSTM(?), respectively.Finally, add an attention layer before ensemble the three-part results. LSTM:this part includes an LSTM layer and two Dense layers.Units of LSTM layer are 264, and the activation function used is Tanh. Units and activation functions in the two dense layers are 64, 2 and Tanh and Softmax, respectively.LSTM is a special RNN type that can learn long-term dependency information which increases the complexity of RNN units, models more carefully, has more constraints, makes training easier, and solves the problem of gradient dissipation of RNN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 217, |
|
"text": "(Greff et al.)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 247, |
|
"text": "(Yenter and Verma)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "CNN+LSTM:this section consists of three different layers,a convolution layer, an LSTM layer, and a Dense layer.In the LSTM layer, units=64, the activation function and dense layer were the same as the former(LSTM) model.Convolutional layer have 64 of the filter siensembleze and 3 kernel size ,followed by a global maximum pool layer.In the task of short text analysis, CNN has a significant effect in dealing with this kind of problems due to the limited length of sentences, compact structure and independent expression of meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "BiLSTM: it consists of a BiLSTM layer, a Convolution layer and a Dense layer.BiLSTM layer contains two parameters (units=128, acti-vation=tanh).The parameters of the convolution layer are units=128,activation=tanh,followed by a global maximum pool layer and global average pool layer.The final dense layer is the same as the two above.It is worth mentioning that the epochs of the three parts are 6, 5 and 7 respectively. All three model use same Optimizers:Adam of learning rate=0.01. Sparse-categorical-crossentropy is used as the loss function.Cross entropy is used to evaluate the difference between the current training probability distribution and the real distribution. It describes the distance between the actual output (probability) and the expected output (probability), that is, the smaller the value of cross entropy, the closer the two probability distributions will be. The difference is that sparse-categorical-crossentropy accepts discrete values. All parameters of the mode shown in the table in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1014, |
|
"end": 1022, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Attention: before ensemble the three models, we used the attention mechanism (Petersen and Posner, 2012) . The introduction of attention mechanism can not only help the model to make better use of the effective information in the input, but also provide some ability to explain the behavior of the neural network model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 104, |
|
"text": "(Petersen and Posner, 2012)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the basic neural network model, \"attention\" is not obtained in the process of decoding,Encoder-Decoder framework transforms input X into semantic representation C,resulting in the translated sequence in which each word takes into account the equal weight of all words in the input. After the attention mechanism is introduced, there are different hidden layer states at different decoding time. Therefore, we use the state of the decoder hidden layer at a certain moment and the state of the encoder at each moment to carry out matching calculation, and get their respective weights. Table 3 : Compare with baseline model Figure 3 : The structure diagram of weight a ij point, the semantic code C is no longer the direct encoding of input sequence X, but the weighted sum of each element according to its importance, as extra attentions, namely formula 1:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 594, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 633, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C i = Tx j=0 a ij f (x j )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In formula (1), parameter i represents the moment, j represents the j th element in the sequence, T x represents the length of the sequence, and f() represents the encoding of element x j . a ij can be seen as a probability reflecting the importance of element h j to C i and can be expressed by Softmax:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a ij = exp(e ij ) Tx k=1 exp(e ik )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here e ij just reflects the matching degree between the element to be encoded and other elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "When the matching degree is higher, it indicates that the element has greater influence on it, and the value of a ij is also higher.Therefore, the process of obtaining a ij is shown in Figure 3 : Where, h i represents the conversion function of Encoder, and F(h j ,H i ) represents the matching scoring function of prediction and target. Finally, concatenation the output after assigning attention weight. In the ensemble model, Soft Voting Classifier (Taylor and Kim, 2011 ) method is used: the average probability of the predicted samples of the three models for a certain category is taken as the standard, and the corresponding type with the highest probability is the final predicted result.", |
|
"cite_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 473, |
|
"text": "(Taylor and Kim, 2011", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 193, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Framework", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We have submitted the results for each language(including:English,Spanish,Kannada, Malayalam and Tamil) given by the sponsor, and table 2 shows the detailed results.Which score is given in 6 methods,there are M-Precision,M-Recall, M-F1-score,W-Precision,W-Recall,and W-F1-score,respectively.The table also shows our team submission ranking and the total number of submission teams.Classification system's performance will be measured and ranked in terms of macro averaged Precision, macro averaged Recall and macro averaged F-Score across all the classes. Note: The follow number of rank indicates total of the teams submitted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The data in the table 2 shows that our results are pretty performance in all languages except for Tamil, all teams performance poor in Tamil language. The first ranked team, Ablimet, submitted a M-F1 score of 0.32 and w-F1-score of 0.42. We will find and solve the specific reason in the future work. The results of our ensemble model were further compared with the baseline model in both macro average F1-score and weighted average F1score in same dataset. Table 3 gives details of the corresponding results, where each option has two data points, macro average F1-score on the left and weighted average F1-score on the right.Observation carefully,all baseline models, or any combination of two of them, end up performing worse than our ensemble model", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 458, |
|
"end": 465, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The ensemble model our submitted consisted of three parts:LSTM,CNN+LSTM and BiLSTM. Among them,CNN,to some extent,takes into account the ordering of the words and the context in which each word appears. Using the LSTM model can better capture long distance dependencies. Because LSTM can learn what to remember and what to forget through the training process,but LSTM doesn't take into account the sequential order of words in a sentence.LSTM has problems with ambiguous affective words in finer -grained classification. Therefore, BiLSTM can better capture the bidirectional semantic dependencies, taking into account the reverse information.Finally, on this basis, attention mechanism is introduced to highlight the key information. In other words, by adjusting a series of weight parameters, it can be used to emphasize or select the important information of the target processing object and suppress some irrelevant details, so as to make the classification more accurate.The model we submitted has achieved performance well, but there is still a lot of room for improvement in both pre-processing and model framework design in the future. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://competitions.codalab.org/ competitions/36393/result 4 https://github.com/TroubleGilr/ Hope-Speech-Detection-for-Equality-Diversity-and-Inclu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "First of all, I would like to thank various technical associations and research institutes for providing research platforms. Secondly, I would like to thank every student volunteer for their dedication,and thank my teacher and partner for their encouragement and support,finally.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Sentiment analysis in english texts", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Al" |
|
], |
|
"last": "Arwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reem", |
|
"middle": [], |
|
"last": "Shamsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Said", |
|
"middle": [], |
|
"last": "Bayari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Salloum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Advances in Science Technology and Engineering Systems Journal", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1683--1689", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.25046/aj0506200" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arwa A. Al Shamsi, Reem Bayari, and Said Salloum. 2021. Sentiment analysis in english texts. Advances in Science Technology and Engineering Systems Jour- nal, 5:1683-1689.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Prasanna Kumar Kumaresan, and Rahul Ponnusamy. 2022b. Findings of the shared task on Homophobia Transphobia Detection in Social Media Comments", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thenmozhi", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"Phillip" |
|
], |
|
"last": "Durairaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Mccrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Buitaleer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Then- mozhi Durairaj, John Phillip McCrae, Paul Buitaleer, Prasanna Kumar Kumaresan, and Rahul Ponnusamy. 2022b. Findings of the shared task on Homophobia Transphobia Detection in Social Media Comments. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Dataset for identification of homophobia and transophobia in multilingual YouTube comments", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasanna", |
|
"middle": [], |
|
"last": "Ponnusamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kayalvizhi", |
|
"middle": [], |
|
"last": "Kumar Kumaresan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Durairaj", |
|
"middle": [], |
|
"last": "Sampath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sathiyaraj", |
|
"middle": [], |
|
"last": "Thenmozhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajendran", |
|
"middle": [], |
|
"last": "Thangasamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"Phillip" |
|
], |
|
"last": "Nallathambi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mccrae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2109.00227" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Rahul Ponnusamy, Prasanna Kumar Kumaresan, Kayalvizhi Sampath, Durairaj Thenmozhi, Sathi- yaraj Thangasamy, Rajendran Nallathambi, and John Phillip McCrae. 2021. Dataset for identi- fication of homophobia and transophobia in mul- tilingual YouTube comments. arXiv preprint arXiv:2109.00227.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "IIITK@LT-EDI-EACL2021: Hope speech detection for equality, diversity, and inclusion in Tamil , Malayalam and English", |
|
"authors": [ |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Ghanghor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Ponnusamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasanna", |
|
"middle": [], |
|
"last": "Kumar Kumaresan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bharathi Raja", |
|
"middle": [], |
|
"last": "Chakravarthi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "197--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikhil Ghanghor, Rahul Ponnusamy, Prasanna Ku- mar Kumaresan, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITK@LT-EDI-EACL2021: Hope speech detection for equality, diversity, and inclusion in Tamil , Malay- alam and English. In Proceedings of the First Work- shop on Language Technology for Equality, Diversity and Inclusion, pages 197-203, Kyiv. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Benchmarking of lstm networks", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Greff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Koutn\u00edk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Steunebrink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Greff, R. K. Srivastava, J Koutn\u00edk, B. R. Steune- brink, and J. Schmidhuber. Benchmarking of lstm networks.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Anbukkarasi Sampath, Kingston Pal Thamburaj, Prabakaran Chandran, and Bharathi Raja Chakravarthi. 2021. Hope speech detection in under-resourced kannada language", |
|
"authors": [ |
|
{ |
|
"first": "Adeep", |
|
"middle": [], |
|
"last": "Hande", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adeep Hande, Ruba Priyadharshini, Anbukkarasi Sam- path, Kingston Pal Thamburaj, Prabakaran Chandran, and Bharathi Raja Chakravarthi. 2021. Hope speech detection in under-resourced kannada language.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "TeamUNCC@LT-EDI-EACL2021: Hope speech detection using transfer learning with transformers", |
|
"authors": [ |
|
{ |
|
"first": "Khyati", |
|
"middle": [], |
|
"last": "Mahajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erfan", |
|
"middle": [], |
|
"last": "Al-Hossami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samira", |
|
"middle": [], |
|
"last": "Shaikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "136--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Khyati Mahajan, Erfan Al-Hossami, and Samira Shaikh. 2021. TeamUNCC@LT-EDI-EACL2021: Hope speech detection using transfer learning with trans- formers. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and In- clusion, pages 136-142, Kyiv. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Porul: Option generation and selection and scoring algorithms for a tamil flash card game", |
|
"authors": [ |
|
{ |
|
"first": "Anitha", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aarthy", |
|
"middle": [], |
|
"last": "Anandan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madhan", |
|
"middle": [], |
|
"last": "Karky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Subalalitha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Journal of Cognitive and Language Sciences", |
|
"volume": "12", |
|
"issue": "2", |
|
"pages": "225--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anitha Narasimhan, Aarthy Anandan, Madhan Karky, and CN Subalalitha. 2018. Porul: Option generation and selection and scoring algorithms for a tamil flash card game. International Journal of Cognitive and Language Sciences, 12(2):225-228.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The attention system of the human brain: 20 years after", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Petersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Posner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Annual Review of Neuroscience", |
|
"volume": "35", |
|
"issue": "1", |
|
"pages": "73--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. E. Petersen and M. I. Posner. 2012. The attention system of the human brain: 20 years after. Annual Review of Neuroscience, 35(1):73-89.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Findings of the shared task on Abusive Comment Detection in Tamil", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thenmozhi", |
|
"middle": [], |
|
"last": "Subalalitha Chinnaudayar Navaneethakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malliga", |
|
"middle": [], |
|
"last": "Durairaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kogilavani", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shanmugavadivel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Siddhanth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasanna", |
|
"middle": [], |
|
"last": "Hegde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kumar Kumaresan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruba Priyadharshini, Bharathi Raja Chakravarthi, Sub- alalitha Chinnaudayar Navaneethakrishnan, Then- mozhi Durairaj, Malliga Subramanian, Kogila- vani Shanmugavadivel, Siddhanth U Hegde, and Prasanna Kumar Kumaresan. 2022. Findings of the shared task on Abusive Comment Detection in Tamil. In Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Ratnavel Rajalakshmi, Sajeetha Thavareesan, Rahul Ponnusamy, and Shankar Mahadevan. 2022. Findings of the shared task on Offensive Span Identification in code-mixed Tamil-English comments", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manikandan Ravikiran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sangeetha", |
|
"middle": [], |
|
"last": "Kumar Madasamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sivanesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manikandan Ravikiran, Bharathi Raja Chakravarthi, Anand Kumar Madasamy, Sangeetha Sivanesan, Rat- navel Rajalakshmi, Sajeetha Thavareesan, Rahul Pon- nusamy, and Shankar Mahadevan. 2022. Findings of the shared task on Offensive Span Identification in code-mixed Tamil-English comments. In Pro- ceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A novel hybrid approach to detect and correct spelling in Tamil text", |
|
"authors": [ |
|
{ |
|
"first": "Ratnasingam", |
|
"middle": [], |
|
"last": "Sakuntharaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIAFS.2016.7946522" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratnasingam Sakuntharaj and Sinnathamby Mahesan. 2016. A novel hybrid approach to detect and correct spelling in Tamil text. In 2016 IEEE International Conference on Information and Automation for Sus- tainability (ICIAfS), pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Use of a novel hash-table for speeding-up suggestions for misspelt Tamil words", |
|
"authors": [ |
|
{ |
|
"first": "Ratnasingam", |
|
"middle": [], |
|
"last": "Sakuntharaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE International Conference on Industrial and Information Systems (ICIIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIINFS.2017.8300346" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratnasingam Sakuntharaj and Sinnathamby Mahesan. 2017. Use of a novel hash-table for speeding-up sug- gestions for misspelt Tamil words. In 2017 IEEE International Conference on Industrial and Informa- tion Systems (ICIIS), pages 1-5.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Missing word detection and correction based on context of Tamil sentences using n-grams", |
|
"authors": [ |
|
{ |
|
"first": "Ratnasingam", |
|
"middle": [], |
|
"last": "Sakuntharaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "2021 10th International Conference on Information and Automation for Sustainability (ICIAfS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--47", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIAfS52090.2021.9606025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratnasingam Sakuntharaj and Sinnathamby Mahesan. 2021. Missing word detection and correction based on context of Tamil sentences using n-grams. In 2021 10th International Conference on Information and Automation for Sustainability (ICIAfS), pages 42-47.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Kishor Kumar Ponnusamy, and Santhiya Pandiyan. 2022. Findings of the shared task on Emotion Analysis in Tamil", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kogilavani", |
|
"middle": [], |
|
"last": "Subalalitha Chinnaudayar Navaneethakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Shanmugavadivel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sathiyaraj", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parameswari", |
|
"middle": [], |
|
"last": "Thangasamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adeep", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Hande", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Benhur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Subalalitha Chinnaudayar Navaneethakrishnan, Kogilavani Shanmugavadivel, Sajeetha Thavareesan, Sathiyaraj Thangasamy, Parameswari Krishna- murthy, Adeep Hande, Sean Benhur, Kishor Kumar Ponnusamy, and Santhiya Pandiyan. 2022. Findings of the shared task on Emotion Analysis in Tamil. In Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automated named entity recognition from tamil documents", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Srinivasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Subalalitha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 IEEE 1st International Conference on Energy, Systems and Information Processing (ICESIP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R Srinivasan and CN Subalalitha. 2019. Automated named entity recognition from tamil documents. In 2019 IEEE 1st International Conference on Energy, Systems and Information Processing (ICESIP), pages 1-5. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Information extraction framework for Kurunthogai", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Subalalitha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "S\u0101dhan\u0101", |
|
"volume": "44", |
|
"issue": "7", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s12046-019-1140-y" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. N. Subalalitha. 2019. Information extraction frame- work for Kurunthogai. S\u0101dhan\u0101, 44(7):156.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatic bilingual dictionary construction for Tirukural", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Subalalitha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Artificial Intelligence", |
|
"volume": "32", |
|
"issue": "6", |
|
"pages": "558--567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CN Subalalitha and E Poovammal. 2018. Automatic bilingual dictionary construction for Tirukural. Ap- plied Artificial Intelligence, 32(6):558-567.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A jackknife and voting classifier approach to feature selection and classification", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Cancer Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. L. Taylor and K. Kim. 2011. A jackknife and voting classifier approach to feature selection and classifica- tion. Cancer Informatics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Sentiment analysis in Tamil texts: A study on machine learning techniques and feature representation", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 14th Conference on Industrial and Information Systems (ICIIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "320--325", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIIS47346.2019.9063341" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2019. Sentiment analysis in Tamil texts: A study on ma- chine learning techniques and feature representation. In 2019 14th Conference on Industrial and Informa- tion Systems (ICIIS), pages 320-325.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Sentiment lexicon expansion using Word2vec and fastText for sentiment prediction in Tamil texts", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 Moratuwa Engineering Research Conference (MERCon)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "272--276", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MERCon50084.2020.9185369" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020a. Sentiment lexicon expansion using Word2vec and fastText for sentiment prediction in Tamil texts. In 2020 Moratuwa Engineering Research Conference (MERCon), pages 272-276.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Word embedding-based part of speech tagging in Tamil texts", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "478--482", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIIS51140.2020.9342640" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020b. Word embedding-based part of speech tag- ging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pages 478-482.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Sentiment analysis in Tamil texts using k-means and k-nearest neighbour", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "2021 10th International Conference on Information and Automation for Sustainability (ICIAfS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--53", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIAfS52090.2021.9605839" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2021. Sentiment analysis in Tamil texts using k-means and k-nearest neighbour. In 2021 10th International Con- ference on Information and Automation for Sustain- ability (ICIAfS), pages 48-53.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Deep cnn-lstm with combined kernels from multiple branches for imdb review sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Yenter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Verma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Yenter and A. Verma. Deep cnn-lstm with com- bined kernels from multiple branches for imdb re- view sentiment analysis. In IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Ensemble model structure diagrams and related model parameters", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Parameters of the model", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>: Data Distribution</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "The experimental results of our model", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Language</td><td>English</td><td>Spanish</td><td>Kannada</td><td>Malayalam</td><td>Tamil</td></tr><tr><td>LSTM</td><td colspan=\"5\">0.34/0.67 0.60/0.62 0.32/0.57 0.30/0.64 0.2/0.30</td></tr><tr><td>CNN</td><td colspan=\"3\">0.30/0.59 0.55/0.59 *</td><td colspan=\"2\">0.29/0.55 *</td></tr><tr><td>CNN+LSTM</td><td colspan=\"3\">0.35/0.70 0.62/0.65 *</td><td colspan=\"2\">0.34/0.66 *</td></tr><tr><td>BiLSTM</td><td colspan=\"3\">0.35/0.71 0.61/0.67 *</td><td colspan=\"2\">0.33/0.64 *</td></tr><tr><td>CNN+BiLSTM</td><td colspan=\"3\">0.37/0.75 0.65/0.70 *</td><td colspan=\"2\">0.33/0.66 *</td></tr><tr><td>LSTM+BiLSTM</td><td colspan=\"3\">0.37/0.80 0.70/0.72 *</td><td colspan=\"2\">0.40/0.70 *</td></tr><tr><td colspan=\"6\">Our approach 0.40/0.88 0.76/0.76 0.45/0.72 0.47/0.72 0.31/0.41</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Bharathi Raja Chakravarthi. 2020b. HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 41-53, Barcelona, Spain (Online). Association for Computational Linguistics. Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc-Crae. 2020. A sentiment analysis dataset for codemixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France. European Language Resources association. Rahul Ponnusamy, Daniel Garc\u00eda-Baena, and Jos\u00e9 Antonio Garc\u00eda-D\u00edaz. 2022a. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>R Anita and CN Subalalitha. 2019a. An approach to cluster Tamil literatures using discourse connectives. In 2019 IEEE 1st International Conference on En-ergy, Systems and Information Processing (ICESIP), pages 1-4. IEEE.</td></tr><tr><td>R Anita and CN Subalalitha. 2019b. Building discourse parser for Thirukkural. In Proceedings of the 16th International Conference on Natural Language Pro-cessing, pages 18-25.</td></tr><tr><td>B Bharathi, Bharathi Raja Chakravarthi, Subalalitha Chinnaudayar Navaneethakrishnan, N Sripriya, Arunaggiri Pandian, and Swetha Valli. 2022. Find-ings of the shared task on Speech Recognition for Vulnerable Individuals in Tamil. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.</td></tr><tr><td>Bharathi Raja Chakravarthi. 2020a. HopeEDI: A mul-tilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Me-dia, pages 41-53, Barcelona, Spain (Online). Associ-ation for Computational Linguistics.</td></tr><tr><td>Bharathi Raja Chakravarthi, Vigneshwaran Muralidaran, Ruba Priyadharshini, Subalalitha Chinnaudayar Na-</td></tr><tr><td>vaneethakrishnan, John Phillip McCrae, Miguel \u00c1n-gel Garc\u00eda-Cumbreras, Salud Mar\u00eda Jim\u00e9nez-Zafra, Rafael Valencia-Garc\u00eda, Prasanna Kumar Kumare-san,</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |