|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:21:05.361359Z" |
|
}, |
|
"title": "Maoqin @ DravidianLangTech-EACL2021: The Application of Transformer-Based Model", |
|
"authors": [ |
|
{ |
|
"first": "Maoqin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yunnan University", |
|
"location": { |
|
"settlement": "Yunnan", |
|
"country": "P.R. China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes the result of team-Maoqin at DravidianLangTech-EACL2021. The provided task consists of three languages (Tamil, Malayalam, and Kannada), I only participate in one of the language task-Malayalam. The goal of this task is to identify offensive language content of the code-mixed dataset of comments/posts in Dravidian Languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media. This is a classification task at the comment/post level. Given a Youtube comment, systems have to classify it into Notoffensive, Offensive-untargeted, Offensivetargeted-individual, Offensive-targeted-group, Offensive-targeted-other, or Not-in-indentedlanguage. I use the transformer-based language model with BiGRU-Attention to complete this task. To prove the validity of the model, I also use some other neural network models for comparison. And finally, the team ranks 5th in this task with a weighted average F1 score of 0.93 on the private leader board.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes the result of team-Maoqin at DravidianLangTech-EACL2021. The provided task consists of three languages (Tamil, Malayalam, and Kannada), I only participate in one of the language task-Malayalam. The goal of this task is to identify offensive language content of the code-mixed dataset of comments/posts in Dravidian Languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media. This is a classification task at the comment/post level. Given a Youtube comment, systems have to classify it into Notoffensive, Offensive-untargeted, Offensivetargeted-individual, Offensive-targeted-group, Offensive-targeted-other, or Not-in-indentedlanguage. I use the transformer-based language model with BiGRU-Attention to complete this task. To prove the validity of the model, I also use some other neural network models for comparison. And finally, the team ranks 5th in this task with a weighted average F1 score of 0.93 on the private leader board.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Offensive language refers to direct or indirect use of verbal abuse, slander, contempt, ridicule, and other means to infringe or damage the dignity, spiritual world, and mental health of others. It will seriously affect the mental state of others, disrupt work, the life and learning order of others, and seriously pollute the public opinion environment of the entire network (Schmidt and Wiegand, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 403, |
|
"text": "(Schmidt and Wiegand, 2017)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Due to the development of the Internet and the popularity of anonymous comments, many offensive languages have spread on the Internet and caused trouble to relevant personnel Mahesan, 2019, 2020a,b) . Relevant organizations should take measures to prevent this from happening. It is unrealistic to judge whether online sentences are completely offended by humans. There-fore, mechanical methods must be used to distinguish whether the language is offensive. The task is to directly test whether the system can distinguish offensive language in Dravidian languages. Dravidian languages are a group of languages spoken by 220 million people, predominantly in southern India and northern Sri Lanka, but also in other areas of South Asia. The Dravidian languages were first recorded in Tamili script inscribed on cave walls in Tamil Nadu's Madurai and Tirunelveli districts in the 6th century BCE. The Dravidian languages are closely related languages the are under-resourced (Chakravarthi, 2020).", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 198, |
|
"text": "Mahesan, 2019, 2020a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing deep learning and pre-training models have achieved good results on other tasks (Zampieri et al., 2019) , so I use the deep learning method to deal with the related task. According to the latest related research progress, the transformer-based language model has become my preferred model. Because the pre-trained and fine-tuned transformersbased models have shown excellent performance in many NLP problems, such as sentiment classification and automatic extraction of text summaries. So I choose ALBERT (Lan et al., 2019) as my basic model in this task. To get a more effective and higher accuracy model, BiGRU combined with attention. To prove the effectiveness of this model, I have also done comparative experiments with other neural networks. In this task, my model is an effective way to perform well. To obtain as much effective information as possible from the limited data, I also use the 5-fold cross-validation method. my model achieves the desired result.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 112, |
|
"text": "(Zampieri et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 532, |
|
"text": "(Lan et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
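As a concrete illustration of the 5-fold cross-validation mentioned above, here is a minimal Python sketch; the build_and_train and evaluate helpers are hypothetical placeholders, not the author's released code.

```python
# Minimal 5-fold cross-validation sketch. StratifiedKFold keeps the highly
# skewed label distribution similar across folds. The build_and_train and
# evaluate callables are hypothetical placeholders, not the paper's code.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(texts, labels, build_and_train, evaluate, n_splits=5):
    texts, labels = np.asarray(texts), np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for fold, (tr, va) in enumerate(skf.split(texts, labels)):
        model = build_and_train(texts[tr], labels[tr])
        score = evaluate(model, texts[va], labels[va])  # e.g. weighted F1
        print(f"fold {fold}: {score:.4f}")
        scores.append(score)
    return float(np.mean(scores))
```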
|
{ |
|
"text": "The rest of this article is structured as follows. Section 2 introduces related work. Model and data preparation are described in Section 3. Experiments and evaluation are described in Section 4. Section 5 describes the results of my work. The conclusions and future work are drawn in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are many competitions about offensive language detection(such as HASOC (Chakravarthi et al., 2020c; Mandl et al., 2020) and TRAC (Kumar et al., 2018)), and many corresponding methods have been produced. People often tend to abstract this task into a text classification task (Howard and Ruder, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 105, |
|
"text": "(Chakravarthi et al., 2020c;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 125, |
|
"text": "Mandl et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 305, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Text classification is called extracting features from original text data and predicting the category of text data based on these features. In the past few decades, many models for text classification have been proposed (Qian, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 232, |
|
"text": "(Qian, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "From the 1960s to the 2010s, text classification models based on shallow learning dominated. Shallow learning means statistical-based models such as Naive Bayes (NB), K Nearest Neighbors (KNN) (Cover and Hart, 1967) and Support Vector Machines (SVM). Compared with earlier rulebased methods, this method has obvious advantages in accuracy and stability. However, these methods still require functional design, which is time-consuming and expensive. In addition, they usually ignore the natural order structure or context information in the text data, which makes learning the semantic information of words difficult. Since the 2010s, text classification has gradually changed from a shallow learning model to a deep learning model. Compared with methods based on shallow learning, deep learning methods avoid the manual design of rules and functions and automatically provide semantically meaningful representations for text mining. Therefore, most of the text classification research work is based on DNN (Yu et al., 2013) , which is a data-driven method with high computational complexity. Few studies have focused on shallow learning models to solve the limitations of computation and data. The shallow learning model speeds up the text classification speed, improves the accuracy, and expands the application range of shallow learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 215, |
|
"text": "(Cover and Hart, 1967)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1006, |
|
"end": 1023, |
|
"text": "(Yu et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
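To make the shallow-learning approach concrete, a classic TF-IDF plus linear SVM text classifier can be built in a few lines with scikit-learn; this is an illustrative baseline, not one of the paper's comparison systems.

```python
# A classic shallow-learning text classifier: TF-IDF features + linear SVM.
# Illustrative baseline only; the example texts and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(["nice song", "you are an idiot"], ["Not_offensive", "Offensive"])
print(clf.predict(["what an idiot"]))  # features are hand-designed, not learned
```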
|
{ |
|
"text": "The shallow learning method is a type of machine learning. It learns from data, which is a predefined function that is important to the performance of the predicted value. However, element engineering is an arduous and giant job. Before training the classifier, we need to collect knowledge or experience to extract features from the original text. The shallow learning method trains the initial classifier based on various text features extracted from the original text. For small data sets, under the limita-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Total number train 16010 development 1999 test 2001 tion of computational complexity, shallow learning models generally show better performance than deep learning models. Therefore, some researchers have studied the design of shallow models in specific areas of data replacement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Deep learning consists of multiple hidden layers in a neural network (Aroyehun and Gelbukh, 2018) , has higher complexity, and can be trained on unstructured data. The deep learning architecture can directly learn feature representations from the input without excessive manual intervention and prior knowledge. However, deep learning technology is a data-driven method that usually requires a lot of data to achieve high performance. And the self-attention-based model can bring some interword interpretability to DNN, but the comparison with the shallow model does not explain why and how it works.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 97, |
|
"text": "(Aroyehun and Gelbukh, 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An overall framework and processing pipeline of my solution are shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 81, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology and Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In my job, I use the ALBERT model as my base model and take BiGRU-Attention behind it. My model is shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 116, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology and Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This is a comment/post level classification task. Given a Youtube comment (Chakravarthi et al., 2020b,a, 2021; Chakravarthi and Muralidaran, 2021), the system has to classify it into one of the five categories mentioned in the Abstract section. For this task, the available sentences including 16010 training sentences, 1999 development sentences, and 2001 testing sentences. The label distribution is very uneven(Not-offensive label accounts 88.4%. The label with the second largest number is not-malayalam, which accounts for only 0.08% of the total. And there are relatively fewer labels in other categories.)The number of sentences for each domain is listed in Table 1. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
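The skewed distribution described above can be inspected with a few lines of Python; the tiny label list in this sketch is a placeholder standing in for the real training split.

```python
# Print the label distribution of a split. The short list below stands in
# for the real 16,010-sentence training split, where Not_offensive dominates.
from collections import Counter

def label_distribution(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    for label, n in counts.most_common():
        print(f"{label:30s} {n:6d}  ({100.0 * n / total:5.1f}%)")

label_distribution(["Not_offensive", "Not_offensive", "Not_offensive",
                    "not-malayalam", "Offensive_Untargeted"])
```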
|
{ |
|
"text": "The ALBERT model belongs to transformer-based language models. The ALBERT model is improved on the basis of Bidirectional Encoder Representations for Transformers(BERT) (Devlin et al., 2018) model. It has designed a parameter reduction method to reduce memory consumption by changing the result of the original embedding parameter P (the product of the vocabulary size V and the hidden layer size H).", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 190, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALBERT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "V * H = P \u2192 V * E + E * H = P (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALBERT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "E represents the size of the low-dimensional embedding space. In BERT, E = H. While in AL-BERT, H >> E, so the number of parameters will be greatly reduced. At the same time, the self-supervised loss is used to focus on the internal coherence in the construction of sentences. The ALBERT model implements three embedding layers: word embedding, position embedding, and segment embedding. The token embedding layer predicts each word as a fixed-size vector. Position embedding is used to retain position information, use a vector to randomly initialize each position, add model training, and finally obtain an embedding containing position information. Segment embedding helps BERT distinguish between paired input sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALBERT", |
|
"sec_num": "3.2" |
|
}, |
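To make Eq. (1) concrete, the following PyTorch sketch compares a naive V x H embedding with the factorized V x E plus E x H version; the sizes are illustrative assumptions, not values reported in the paper.

```python
# Factorized embedding parameterization from Eq. (1): replace one V x H
# table with a V x E table plus an E x H projection (much smaller when H >> E).
import torch.nn as nn

V, E, H = 30000, 128, 768  # illustrative vocabulary/embedding/hidden sizes

naive = nn.Embedding(V, H)              # V*H = 23,040,000 parameters
factorized = nn.Sequential(
    nn.Embedding(V, E),                 # V*E =  3,840,000 parameters
    nn.Linear(E, H, bias=False),        # E*H =     98,304 parameters
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(naive), count(factorized))  # 23040000 vs 3938304
```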
|
{ |
|
"text": "The BiGRU-Attention model (Cover and Hart, 1967) is divided into three parts: text vector input layer, hidden layer, and output layer. Among them, the hidden layer consists of three layers: the BiGRU layer, the attention layer, and the Dense layer (fully connected layer). I set the output of the ALBERT model as the input. After receiving the input, it uses the BiGRU neural network layer to extract features of the deep-level information of the text firstly. Secondly, it uses the attention layer to assign corresponding weights to the deep-level information of the extracted text. Finally, the text feature information with different weights is put into the softmax function layer for classification. The structure of the BiGRU-Attention model is shown in Figure 3 . Model ALBERT(Base) train step 2501 learning rate 2e-5 batch size 32 epoch 5 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 48, |
|
"text": "(Cover and Hart, 1967)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 759, |
|
"end": 767, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BiGRU-Attention", |
|
"sec_num": "3.3" |
|
}, |
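A plausible PyTorch rendering of this BiGRU-Attention head is sketched below; the layer sizes and the exact attention form are my assumptions, since the paper does not publish code.

```python
# A plausible BiGRU-Attention head over ALBERT token outputs (a sketch; the
# sizes and the attention form are assumptions, the paper publishes no code).
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, in_dim=768, gru_dim=256, num_classes=6):
        super().__init__()
        self.bigru = nn.GRU(in_dim, gru_dim, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * gru_dim, 1)    # scores each time step
        self.dense = nn.Linear(2 * gru_dim, num_classes)

    def forward(self, albert_out):               # (batch, seq_len, in_dim)
        r, _ = self.bigru(albert_out)            # (batch, seq_len, 2*gru_dim)
        w = torch.softmax(self.attn(r), dim=1)   # attention weights over time
        context = (w * r).sum(dim=1)             # weighted sum of features
        return self.dense(context)               # logits; softmax in the loss

logits = BiGRUAttention()(torch.randn(2, 32, 768))  # two comments, 32 tokens
```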
|
{ |
|
"text": "In this task, I use the ALBERT model to pre-train the task. For the ALBERT model, the main hyperparameters I pay attention to are the training step size, batch size and learning rate. The parameters of my model are shown in Table 2 . I have obtained good performance using the ALBERT-BASE. 1 model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 231, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
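Wiring this configuration up with the albert-base-v2 checkpoint might look like the sketch below; the AdamW optimizer choice and the training-loop details are assumptions, and BiGRUAttention refers to the sketch after Section 3.3.

```python
# Fine-tuning setup using the Table 2 hyperparameters (sketch; the optimizer
# choice and loop details are assumptions, not the paper's released code).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
encoder = AutoModel.from_pretrained("albert-base-v2")
head = BiGRUAttention()                       # sketch from Section 3.3

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()),
    lr=2e-5,                                  # learning rate from Table 2
)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(texts, labels):                # called with batches of 32
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, 768)
    loss = loss_fn(head(hidden), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```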
|
{ |
|
"text": "Considering that BiGRU-Attention can capture contextual information well and extract text information features more accurately (Radford et al., 2018) , I add it after AL-BERT. I use the development data set to verify the performance of the models. The standard of judgment is a weighted F1-score, and this standard is the judgment standard used for my task. Table3 lists the results of various models described previously. The best performance is in bold. My model gets the best performance of 0.93. As shown in the table my model can greatly improve the performance and my overall approach achieved 5th place on the final leader board.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 149, |
|
"text": "(Radford et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
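The weighted average F1 used as the judgment standard can be computed directly with scikit-learn; the toy labels below are placeholders, not shared-task data.

```python
# Weighted-average F1: per-class F1 scores averaged with weights proportional
# to each class's support, so the dominant Not_offensive class weighs most.
from sklearn.metrics import f1_score

y_true = ["Not_offensive", "Not_offensive", "not-malayalam", "Offensive_Untargeted"]
y_pred = ["Not_offensive", "Not_offensive", "Not_offensive", "Offensive_Untargeted"]
print(f1_score(y_true, y_pred, average="weighted"))
```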
|
{ |
|
"text": "The output of the classification result is shown in Figure 4 . We can see that the label of is zero. N ot \u2212 Of f ensive labels account for the majority, accounting for 91.15% of the total number of labels. The N ot \u2212 M alayalam labels account for the second most significant 7.5% of the total. Offensive-Untargeted labels are the least, only about 1%. This may be due to data imbalance (N ot \u2212 Of f ensive labels in the training set account for about 88% of the total) resulting in only three categories being identified.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 60, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, I present my result on Offensive Language Identification in Dravidian Languages-EACL 2021 which includes three tasks of different languages. For this task, I regard it as a multiple classification task, I use the BiGRU-Attention based on the ALBERT model to complete, and my model works very well. I also summarized the possible reasons for classifying only three types of labels. At the same time, I also use some other neural networks for comparative experiments to prove that my model can obtain excellent performance. The result shows that my model ranks 5th in the Malayalam task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Due to the continuous development of the definition of offensive information on the Internet, it is difficult to accurately describe the nature of this information only from the perspective of data mining, which makes it impossible to model this information effectively. In the future, I will use methods based on multidisciplinary discovery to guide model learning. These models are more likely to use limited data to learn more effective models. At the same time, I will also consider whether I can use other transfer learning models to perform better on multi-classification tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Aroyehun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gelbukh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "90--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. T. Aroyehun and A. Gelbukh. 2018. Aggression de- tection in social media: Using deep neural networks, data augmentation, and pseudo labeling. Proceed- ings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 90-97.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Leveraging orthographic information to improve machine translation of under-resourced languages", |
|
"authors": [ |
|
{ |
|
"first": "Chakravarthi", |
|
"middle": [], |
|
"last": "Bharathi Raja", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi. 2020. Leveraging ortho- graphic information to improve machine translation of under-resourced languages. Ph.D. thesis, NUI Galway.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A sentiment analysis dataset for codemixed Malayalam-English", |
|
"authors": [ |
|
{ |
|
"first": "Navya", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shardul", |
|
"middle": [], |
|
"last": "Jose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Suryawanshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"Philip" |
|
], |
|
"last": "Sherly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mc-Crae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc- Crae. 2020a. A sentiment analysis dataset for code- mixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technolo- gies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France. European Language Resources association.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion", |
|
"authors": [ |
|
{ |
|
"first": "Vigneshwaran", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Muralidaran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Mural- idaran. 2021. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclu- sion. In Proceedings of the First Workshop on Lan- guage Technology for Equality, Diversity and Inclu- sion. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text", |
|
"authors": [ |
|
{ |
|
"first": "Vigneshwaran", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Muralidaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"Philip" |
|
], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mc-Crae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "202--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip Mc- Crae. 2020b. Corpus creation for sentiment anal- ysis in code-mixed Tamil-English text. In Pro- ceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced lan- guages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France. European Language Re- sources association.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Findings of the shared task on Offensive Language Identification in Tamil, Malayalam, and Kannada", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navya", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Jose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasanna", |
|
"middle": [], |
|
"last": "Kumar Kumaresan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Ponnusamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hariharan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Sherly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"Philip" |
|
], |
|
"last": "Mc-Crae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Navya Jose, Anand Kumar M, Thomas Mandl, Prasanna Kumar Kumaresan, Rahul Ponnusamy, Hariharan V, Elizabeth Sherly, and John Philip Mc- Crae. 2021. Findings of the shared task on Offen- sive Language Identification in Tamil, Malayalam, and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravid- ian Languages. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overview of the Track on Sentiment Analysis for Dravidian Languages in Code-Mixed Text", |
|
"authors": [ |
|
{ |
|
"first": "Ruba", |
|
"middle": [], |
|
"last": "Bharathi Raja Chakravarthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vigneshwaran", |
|
"middle": [], |
|
"last": "Priyadharshini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shardul", |
|
"middle": [], |
|
"last": "Muralidaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navya", |
|
"middle": [], |
|
"last": "Suryawanshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Jose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Sherly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mccrae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "In Forum for Information Retrieval Evaluation", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "21--24", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3441501.3441515" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Vigneshwaran Muralidaran, Shardul Suryawanshi, Navya Jose, Elizabeth Sherly, and John P. McCrae. 2020c. Overview of the Track on Sentiment Analy- sis for Dravidian Languages in Code-Mixed Text. In Forum for Information Retrieval Evaluation, FIRE 2020, page 21-24, New York, NY, USA. Associa- tion for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Nearest neighbor pattern classification", |
|
"authors": [ |
|
{ |
|
"first": "Hart", |
|
"middle": [], |
|
"last": "Cover", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cover and Hart. 1967. Nearest neighbor pattern classi- fication.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Universal language model fine-tuning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.06146" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv:1801.06146.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Benchmarking aggression identification in social media", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Ojha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Kumar, A. K. Ojha, S. Malmasi, and M. Zampieri. 2018. Benchmarking aggression identification in social media. Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC- 2018), pages 1-11.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Albert: A lite bert for self-supervised learning of language representations", |
|
"authors": [ |
|
{ |
|
"first": "Zhenzhong", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingda", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.11942" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, and Piyush Sharma. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942. Version 6.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandip", |
|
"middle": [], |
|
"last": "Modha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bharathi Raja Chakravarthi ;", |
|
"middle": [], |
|
"last": "Malayalam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hindi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "English", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Forum for Information Retrieval Evaluation", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "29--32", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3441501.3441517" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malay- alam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Com- puting Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A review of the latest text classification 2020-the development of text classification from shallow to deep from", |
|
"authors": [], |
|
"year": 1961, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qu Qian. 2020. A review of the latest text classifi- cation 2020-the development of text classification from shallow to deep from 1961 to 2020.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Improving language understanding with unsupervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing with unsupervised learning. Technical re- port, OpenAI.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A survey on hate speech detection using natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fifth International workshop on natural language processing for social media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Schmidt and M. Wiegand. 2017. A survey on hate speech detection using natural language processing. Proceedings of the Fifth International workshop on natural language processing for social media, (1):1- 10.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 14th Conference on Industrial and Information Systems (ICIIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "320--325", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIIS47346.2019.9063341" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2019. Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Rep- resentation. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 320-325.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 Moratuwa Engineering Research Conference (MERCon)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "272--276", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MERCon50084.2020.9185369" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020a. Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts. In 2020 Moratuwa Engineering Re- search Conference (MERCon), pages 272-276.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Word embedding-based Part of Speech tagging in Tamil texts", |
|
"authors": [ |
|
{ |
|
"first": "Sajeetha", |
|
"middle": [], |
|
"last": "Thavareesan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sinnathamby", |
|
"middle": [], |
|
"last": "Mahesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "478--482", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICIIS51140.2020.9342640" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020b. Word embedding-based Part of Speech tag- ging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pages 478-482.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Feature learning in deep neural networks -studies on speech recognition tasks", |
|
"authors": [ |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Seltzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinyu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jui-Ting", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Seide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3605" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. 2013. Feature learning in deep neu- ral networks -studies on speech recognition tasks. arXiv:1301.3605.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Predicting the type and target of offensive posts in social media", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.09666" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Zampieri, S. Malmasi, P. Nakov, and S. Rosenthal. 2019. Predicting the type and target of offensive posts in social media. arXiv:1902.09666.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "An overall frameworkFigure 2: The architecture of the model, where the E[CLS] and E[SEP ] are added at the beginning and end of each instance respectively, which can separate different sentences. The format is as follows: [CLS]+sentence+[SEP ].", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "The structure of the BiGRU-Attention model. The I1, I2...Im represent the output of the ALBERT layer and the R1, R2...Rm represent the output of the BiGRU layer and will be input to the Attention layer.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Of f ensive \u2212 T argeted \u2212 Insult \u2212 Other, Of f ensive \u2212 T argeted \u2212 Insult \u2212 Individual, and Of f ensive \u2212 T argeted \u2212 Insult \u2212 Group 1 https://huggingface.co/albert-base-v2", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "The classification result", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "The number of sentences in each set.", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "The parameter configuration of ALBERT.", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Results of comparative experiments.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |