{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:17.212827Z"
},
"title": "Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020",
"authors": [
{
"first": "Sudhanshu",
"middle": [],
"last": "Mishra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Shivangi",
"middle": [],
"last": "Prasad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana Champaign",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana Champaign",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present our team '3Idiots' (referred as 'sdhanshu' in the official rankings) approach for the Trolling, Aggression and Cyberbullying (TRAC) 2020 shared tasks. Our approach relies on fine-tuning various Transformer models on the different datasets. We also investigated the utility of task label marginalization, joint label classification, and joint training on multilingual datasets as possible improvements to our models. Our team came second in English sub-task A, a close fourth in the English sub-task B and third in the remaining 4 sub-tasks. We find the multilingual joint training approach to be the best trade-off between computational efficiency of model deployment and model's evaluation performance. We open source our approach at https://github.com/socialmediaie/TRAC2020.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present our team '3Idiots' (referred as 'sdhanshu' in the official rankings) approach for the Trolling, Aggression and Cyberbullying (TRAC) 2020 shared tasks. Our approach relies on fine-tuning various Transformer models on the different datasets. We also investigated the utility of task label marginalization, joint label classification, and joint training on multilingual datasets as possible improvements to our models. Our team came second in English sub-task A, a close fourth in the English sub-task B and third in the remaining 4 sub-tasks. We find the multilingual joint training approach to be the best trade-off between computational efficiency of model deployment and model's evaluation performance. We open source our approach at https://github.com/socialmediaie/TRAC2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The internet has become more accessible in recent years, leading to an explosion in content being produced on social media platforms. This content constitutes public views, and opinions. Furthermore, social media has become an important tool for shaping the socio-economic policies around the world. This utilization of social media by public has also attracted many malicious actors to indulge in negative activities on these platforms. These negative activities involve, among others, misinformation, trolling, displays of aggression, as well as cyberbullying behaviour (Mishra et al., 2014) . These activities have led to derailment and disruption of social conversation on these platforms. However, efforts to moderate these activities have revealed the limits of manual content moderation systems, owing to the the scale and velocity of content production. This has allowed more and more platforms to move to automated methods for content moderation. However, simple rule based methods do not work for subjective tasks like hate-speech, trolling, and aggression identification. These limitations have moved the automated content moderation community to investigate the usage of machine learning based intelligent systems which can identify the nuance in language to perform the above mentioned tasks more efficiently. In this work, we utilize the recent advances in information extraction systems for social media data. In the past we have used information extraction for identifying sentiment in tweets (Mishra and Diesner, 2018) (Mishra et al., 2015) , enthusiastic and passive tweets and users (Mishra et al., 2014) (Mishra and Diesner, 2019) , and extracting named entities (Mishra, 2019) (Mishra and Diesner, 2016) . We extend a methodology adopted in our previous work ) on on Hate Speech and Offensive Content (HASOC) identification in Indo-European Languages (Mandl et al., 2019) . 
In our work on HASOC, we investigated the usage of monolingual and multilingual transformer (Vaswani et al., 2017) models (specifically Bidirectional Encoder Representation from Transformers (BERT) (Devlin et al., 2019) ) for hate speech identification. In this work, we extend our analysis to include a newer variant of transformer model called XLM-Roberta (Conneau et al., 2019) . In this year's TRAC (Ritesh Kumar and Zampieri, 2020) shared tasks, our team '3Idiots' (our team is referred as 'sdhanshu' in the rankings(Ritesh Kumar and Zampieri, 2020)) experimented with fine-tuning different pre-trained transformer networks for classifying aggressive and misogynistic posts. We also investigated a few new techniques not used before, namely, joint multitask multilingual training for all tasks, as well as marginalized predictions based on joint multitask model probabilities. Our team came second in English sub-task A, a close fourth in the English sub-task B and third in the remaining 4 sub-tasks. We open source our approach at https://github.com/socialmediaie/TRAC2020.",
"cite_spans": [
{
"start": 572,
"end": 593,
"text": "(Mishra et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1509,
"end": 1535,
"text": "(Mishra and Diesner, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1536,
"end": 1557,
"text": "(Mishra et al., 2015)",
"ref_id": null
},
{
"start": 1602,
"end": 1623,
"text": "(Mishra et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1624,
"end": 1650,
"text": "(Mishra and Diesner, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1683,
"end": 1697,
"text": "(Mishra, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1698,
"end": 1724,
"text": "(Mishra and Diesner, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 1872,
"end": 1892,
"text": "(Mandl et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1987,
"end": 2009,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 2093,
"end": 2114,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 2253,
"end": 2275,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 2306,
"end": 2331,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The shared tasks in this year's TRAC focused on Aggression and Misogynistic content classification(Ritesh Kumar and Zampieri, 2020) , the related work in this field focuses on a more general topic that is hate speech and abusive content detection. The abusive content identification tasks are are challenging due to the lack of large amounts of labeled datasets. The currently available datasets lack variety and uniformity. They are usually skewed towards specific topics in hate speech like racism, sexism. A good description of the various challenges in abusive content detection can be found here. (Vidgen et al., 2019) The recent developments in the field of Natural Language Processing (NLP) have really spearheaded research in this domain. One of the most remarkable developments in NLP was the introduction of transformer models (Vaswani et al., 2017) using different attention mechanisms, which have become state of the art in many NLP tasks beating recurrent neural networks and gated networks. These transformer models can process longer contextual information than the standard RNNs. One of the main state of the art models in many NLP tasks are Bidirectional Encoder Representation from Transformers (BERT) models (Devlin et al., 2019) . The open source transformers library by HuggingFace Inc. (Wolf et al., 2019) has made fine-tuning pre-trained trans-former models easy. In a 2019 task on Hate Speech and Offensive Content (HASOC) identification in Indo-European Languages (Mandl et al., 2019) , we had the opportunity to try out different BERT models . Our models performed really well in the HASOC shared task, achieving first position on 3 of the 8 sub-tasks and being within top 1% for 5 of the 8 sub-tasks. This motivated us to try similar methods in this year's TRAC (Ritesh Kumar and Zampieri, 2020) shared tasks using other transformer models using our framework from HASOC based on the HuggingFace transformers library 1 .",
"cite_spans": [
{
"start": 106,
"end": 131,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF13"
},
{
"start": 602,
"end": 623,
"text": "(Vidgen et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 837,
"end": 859,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 1227,
"end": 1248,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 1489,
"end": 1509,
"text": "(Mandl et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The data-set provided by the organizers consisted of posts taken from Twitter and YouTube. They provided us with training and dev datasets for training and evaluation of our models for three languages, namely English (ENG), Hindi (HIN) and Bengali (IBEN). For both the sub-tasks, the same training and dev data-sets were used with different fine-tuning techniques. The Aggression Identification sub-task (task -A) consisted of classifying the text data into 'Overtly Aggressive' (OAG), 'Covertly Aggressive' (CAG) and 'Non-Aggressive' (NAG) categories. The Misogynistic Aggression Identification sub-task (task -B) consisted of classifying the text data into 'Gendered' (GEN) and 'Non-gendered' (NGEN) categories. For further details about the shared tasks, we refer to the TRAC website and the shared task paper (Ritesh Kumar and Zampieri, 2020) . The data distribution for each language and each sub-task is mentioned in Table 1 ",
"cite_spans": [
{
"start": 821,
"end": 846,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 923,
"end": 930,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Our methods used for the TRAC (Ritesh Kumar and Zampieri, 2020) shared tasks are inspired from our previous work at HASOC 2019 (Mandl et al., 2019) . For the different shared tasks we fine-tuned different pre-trained transformer neural network models using the HuggingFace transformers library.",
"cite_spans": [
{
"start": 38,
"end": 63,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF13"
},
{
"start": 127,
"end": 147,
"text": "(Mandl et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4."
},
{
"text": "For all of the sub-tasks we used different pre-trained transformer neural network models. The transformer architecture was proposed in (Vaswani et al., 2017) . It's effectiveness has been proved in numerous NLP tasks like machine translation, sequence classification and natural language generation. A transformer consists of a set of stacked encoders and decoders with different attention mechanisms.",
"cite_spans": [
{
"start": 135,
"end": 157,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "4.1."
},
{
"text": "1 https://github.com/huggingface/transformers Like any encoder-decoder model, it takes an input sequence produces a latent representation which is passed on to the decoder which gives an output sequence. A major change in the transformer architectures was that the decoder is supplied with all of the hidden states of the encoder. This helps the model to gain contextual information for even large sequences. To process the texts we utilized the model specific tokenizers provided in the HuggingFace transformers library to convert the texts into a sequence of tokens which are then utilised to generate the features for the model. We utilised similar training procedures like the one used in our HASOC 2019 submission code 2 . We investigated with two variants of transformer models, namely BERT (both monolingual and multilingual) (Devlin et al., 2019) and XLM-Robert (Conneau et al., 2019) . While, for BERT we tested its in English, and multilingual versions, whereas, for XLM-Roberta we tried only the multilingual model. There are many other variants of transformers but we could not try them out because of GPU memory constraints, as these models require GPUs with very large amounts of RAM.",
"cite_spans": [
{
"start": 833,
"end": 854,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 870,
"end": 892,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "4.1."
},
{
"text": "For the TRAC shared tasks we investigated the following fine-tuning techniques on the different transformer models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
{
"text": "\u2022 Simple fine-tuning: In this approach we simply fine tune an existing transformer model for the specific language on the new classification data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
{
"text": "\u2022 Joint label training (C): In our approach during the HASOC (Mishra and Mishra, 2019) shared tasks we had to tackle the problem of data sparsity as the different tasks did not have enough data samples, which makes the training of deep learning models very difficult. To tackle this issue, we had combined the labels of the different shared tasks, which enabled us to train a single model for both the tasks. We tried the same approach for TRAC (Ritesh Kumar and Zampieri, 2020) ,although, here both tasks had the same dataset, so this did not result in an increase in the size of the dataset but it did enable us to train a single model capable of handling both the tasks. We combined the labels of the 2 sub-tasks and trained a single model for the classification. The predicted outputs were NAG-GEN, NAG-NGEN, CAG-GEN, CAG-NGEN, OAG-GEN and OAG-NGEN respectively, taking the argmax of the outputs produces the corresponding label for each task. To get the output of the respective tasks is trivial, we just have to separate the labels by the '-' symbol, where the first word corresponds to sub-task A and second word corresponds to sub-task B. The models using this technique are labeled with (C) in the results table below. NGEN) . Hence, we introduce a marignalized post processing of label to get the total probablity assigned to labels of a given subtasks by marignalizing probabilities across all other subtask labels. This can be done very easily by just summing the combined labels of a particular task label, Eg. the probabilities of CAG-GEN and CAG-NGEN can be added to get the probability of the label CAG for sub-task A. This provides a stronger signal for each task label. Then, finally taking the argmax of the marginalized labels of the respective tasks, determines the output label for that task. The models using this technique are labeled with (M) in the results table below. We only use this approach for post-processing the label probabilities of the joint model. 
In future we plan to investigate using this marginalized approach during the training phase.",
"cite_spans": [],
"ref_spans": [
{
"start": 1228,
"end": 1233,
"text": "NGEN)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
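The joint-label (C) prediction and the marginalized (M) post-processing described above can be sketched as follows. This is a minimal sketch: the six joint labels are from the paper, but the probability vector and function names are illustrative assumptions.

```python
# Sketch of the joint-label (C) prediction and the marginalized (M)
# post-processing described above. The probability values are illustrative.
JOINT_LABELS = ["NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN", "OAG-GEN", "OAG-NGEN"]

def split_joint(label):
    # Sub-task A label before the '-', sub-task B label after it.
    a, b = label.split("-", 1)
    return a, b

def predict_c(probs):
    # (C): argmax over the six joint labels, then split into the two task labels.
    best = JOINT_LABELS[max(range(len(probs)), key=probs.__getitem__)]
    return split_joint(best)

def predict_m(probs):
    # (M): sum joint-label probabilities that share a task label,
    # then take the argmax separately for each sub-task.
    task_a, task_b = {}, {}
    for label, p in zip(JOINT_LABELS, probs):
        a, b = split_joint(label)
        task_a[a] = task_a.get(a, 0.0) + p
        task_b[b] = task_b.get(b, 0.0) + p
    return max(task_a, key=task_a.get), max(task_b, key=task_b.get)

probs = [0.05, 0.10, 0.25, 0.24, 0.06, 0.30]  # example joint probabilities
print(predict_c(probs))  # ('OAG', 'NGEN'): OAG-NGEN has the single highest probability
print(predict_m(probs))  # ('CAG', 'NGEN'): CAG mass 0.49 beats OAG 0.36 after marginalizing
```

The example probabilities show why marginalization can matter: the single most likely joint label is OAG-NGEN, but after summing across sub-task B labels, CAG carries more total probability for sub-task A.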
{
"text": "\u2022 Joint training of different languages (ALL): This was a technique that we previously did not experiment with in HASOC . Currently we do not have models dedicated to many languages, e.g., there are specific pre-trained BERT (Devlin et al., 2019) models for the English language but no such model for Hindi exists. For those languages, our only choice is to utilize a multilingual or crosslingual model. Furthermore, as the data consisted of social-media posts, which predominantly consists of sentences containing a mix of different languages, we expected the cross-lingual models to perform better than the others. An obvious advantage of using a multi-lingual model is that it can process data from multiple languages, therefore we can train a single model for all of the different languages for each subtask. To do so we combined the datasets of the three languages into a single dataset, keeping track of which text came from which language. This can easily be done by flagging the respective id with the respective Table 3 : Results of sub-task B for each model and each language.",
"cite_spans": [],
"ref_spans": [
{
"start": 1021,
"end": 1028,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
{
"text": "language name. This increases the size of the dataset which is beneficial for training deep learning models. We then fine-tuned the pre-trained multilingual model for our dataset. After training, we can separate the dataset based on their language id. Thus resulting in a single model that is able to classify data from all of the three languages. This can be especially useful in deploying situations as this results in models which are resource friendly. The models using this technique are labeled with (ALL) in the results table below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
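The dataset-combination step above can be sketched in a few lines of Python. This is a minimal sketch: the field names ("id", "text", "label") and the `lang:id` flagging scheme are assumptions for illustration, not the paper's exact data format.

```python
# Sketch of the multilingual joint-training (ALL) data preparation described
# above: concatenate the per-language datasets, flagging each example's id
# with its language so predictions can be separated again after inference.
# Field names ("id", "text", "label") are hypothetical.

def combine(datasets):
    # datasets: mapping of language name -> list of example dicts
    combined = []
    for lang, examples in datasets.items():
        for ex in examples:
            tagged = dict(ex)
            tagged["id"] = f"{lang}:{ex['id']}"  # flag the id with the language
            combined.append(tagged)
    return combined

def split_by_language(combined):
    # Recover the per-language subsets from the flagged ids after inference.
    out = {}
    for ex in combined:
        lang, _, orig_id = ex["id"].partition(":")
        out.setdefault(lang, []).append({**ex, "id": orig_id})
    return out

data = {
    "ENG": [{"id": "1", "text": "an english post", "label": "NAG"}],
    "HIN": [{"id": "1", "text": "a hindi post", "label": "CAG"}],
}
merged = combine(data)  # one dataset, so one multilingual model serves all languages
print(len(merged))      # 2
```

A single multilingual model fine-tuned on `merged` then handles all three languages, which is the deployment-efficiency benefit the text describes.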
{
"text": "\u2022 Combining the above three techniques: Finally, we also experimented with combining all of the above three techniques. This results in a single model that can be used for all of the six sub-tasks. Thus, this technique is very efficient in terms of resources used and flexibility. The models using this technique are labeled either (ALL) (M) or (ALL) (C) in the results table below, based on the presence and absence of the marignalization approach, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning Techniques",
"sec_num": "4.2."
},
{
"text": "For training our models we used the standard hyperparameters as mentioned in the transformers models. We used the Adam optimizer (with = 1e \u2212 8) for five epochs, with a training/eval batch size of 32. Maximum allowable length for each sequence is 128. We use a learning rate of 5e \u2212 5 with a weight decay of 0.0 and a max gradient norm of 1.0. All models were trained using Google Colab's 3 GPU runtimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.3."
},
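The hyperparameters reported above can be collected into a single configuration sketch, in the style of the HuggingFace training scripts. The key names here are assumptions for readability; only the values are from the paper.

```python
# Hyperparameters reported in the Training section, gathered into one place.
# Key names are assumptions; the values (Adam with eps 1e-8, 5 epochs,
# batch size 32, max length 128, lr 5e-5, weight decay 0.0, grad norm 1.0)
# are the ones stated in the paper.
TRAINING_CONFIG = {
    "optimizer": "Adam",
    "adam_epsilon": 1e-8,
    "num_train_epochs": 5,
    "train_batch_size": 32,
    "eval_batch_size": 32,
    "max_seq_length": 128,
    "learning_rate": 5e-5,
    "weight_decay": 0.0,
    "max_grad_norm": 1.0,
}
```

A dictionary like this maps directly onto the argument names of typical transformers fine-tuning scripts, which is presumably how the values were supplied in the authors' HASOC-based code.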
{
"text": "For each language and each sub-task we experimented with different pre-trained transformer language models present in the transformers library using the various fine-tuning techniques mentioned in the previous section. The different models with their respective dev and training weighted-F1 and macro-F1 scores for sub-task A and sub-Task B are given in Table 2 and Table 4 : Test results of the submitted models lows the following convention to describe the fine-tuning technique used in each experiment. We submitted the top three models based on the weighted-F1 scores on the dev dataset. ",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 366,
"end": 373,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Experiments",
"sec_num": "4.4."
},
{
"text": "We were only provided with the weighted-F1 scores of the three submitted models in each task. Hence, only those results are mentioned in Table 4 . Based on the final leaderboard, our models were ranked second in 1/6 task, third in 4/6 tasks, and 4/6 in 1/6 tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "External Evaluation",
"sec_num": "4.5."
},
{
"text": "On the basis of the various experiments conducted using the many transformer models, we see that most of them give a similar performance, being within 2\u22123% of the best model. Exception being the xlm-roberta-base (Liu et al., 2019) model which showed appreciable variations. It performed extremely poorly in the Hindi sub-tasks, but with the joint training with different languages its performance increased significantly. Using the joint label training technique it performed really well in the Bengali sub-tasks whilst also being the bottom performer with the other techniques. One important thing to notice is that the joint training with different language fine-tuning technique (ALL) works really well. It was a consistent top performing model in our experiments, being the best for Bengali. In most cases, we can see that the (ALL) models were better than the base model without any marginalization or joint-training. The marginalization scheme does not change the results much from the joint label training approach. A major benefit of using joint training with different languages, is that is significantly reduces the computational cost of the usage of our models, as we have to only train a single model for multiple tasks and languages, so even if there is a slight performance drop in the (ALL) (C) or (M) model compared to the single model, usage of the (ALL) (C) or (M) model should still be preferred for its computational efficiently. Our team came second in English sub-task A, a close fourth in the English sub-task B and third in the remaining 4 sub-tasks.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "From the experiments conducted for this year's TRAC (Ritesh Kumar and Zampieri, 2020) shared tasks, we see that the (ALL) models provide us with an extremely pow-",
"cite_spans": [
{
"start": 60,
"end": 85,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "https://github.com/socialmediaie/HASOC2019",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://colab.research.google.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Weighted-F1 rank lang task model run id dev train dev train ENG B bert-base-uncased (M) 4 (M) 0.757 0.920 0.943 0.978 1 xlm-roberta-base (ALL) 9 0.765 0.878 0.941 0.968 2 bert-base-multilingual-uncased (ALL) (M) 9 (M) 0.760 0.939 0.940 0.983 3 bert-base-uncased (C) 4 (C) 0.734 0.914 0.939 0.977 4 bert-base-cased (C) 3 ( ",
"cite_spans": [
{
"start": 314,
"end": 317,
"text": "(C)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Macro-F1",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "approach which gives us a single model capable of classifying texts across all the six shared sub-tasks. We have presented our team 3Idiots's (our team is referred as 'sdhanshu' in the rankings(Ritesh Kumar and Zampieri, 2020)) approach based on fine-tuning monolingual and multi-lingual transformer networks to classify social media posts in three different languages for Trolling, Aggression and Cyber-bullying content",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "approach which gives us a single model capable of classifying texts across all the six shared sub-tasks. We have presented our team 3Idiots's (our team is referred as 'sdhanshu' in the rankings(Ritesh Kumar and Zampieri, 2020)) approach based on fine-tuning monolingual and multi-lingual transformer networks to classify social media posts in three different languages for Trolling, Aggression and Cyber-bullying content. We open source our approach at: https://github.com/socialmediaie/TRAC2020 7. Bibliographical References",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzm\u00e1n, F., Grave, E., Ott, M., Zettle- moyer, L., and Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandl, T., Modha, S., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019). Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages. In Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation, December.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semi-supervised Named Entity Recognition in noisy-text",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Diesner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "203--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S. and Diesner, J. (2016). Semi-supervised Named Entity Recognition in noisy-text. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 203-212, Osaka, Japan. The COLING 2016 Orga- nizing Committee.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Detecting the Correlation between Sentiment and User-level as well as Text-Level Meta-data from Benchmark Corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Diesner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 29th on Hypertext and Social Media -HT '18",
"volume": "",
"issue": "",
"pages": "2--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S. and Diesner, J. (2018). Detecting the Correla- tion between Sentiment and User-level as well as Text- Level Meta-data from Benchmark Corpora. In Proceed- ings of the 29th on Hypertext and Social Media -HT '18, pages 2-10, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Capturing Signals of Enthusiasm and Support Towards Social Issues from Twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Diesner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th International Workshop on Social Media World Sensors -SIdEWayS'19",
"volume": "",
"issue": "",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S. and Diesner, J. (2019). Capturing Signals of En- thusiasm and Support Towards Social Issues from Twit- ter. In Proceedings of the 5th International Workshop on Social Media World Sensors -SIdEWayS'19, pages 19- 24, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "3Idiots at HASOC 2019: Fine-tuning Transformer Neural Networks for Hate Speech Identification in Indo-European Languages",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S. and Mishra, S. (2019). 3Idiots at HASOC 2019: Fine-tuning Transformer Neural Networks for Hate Speech Identification in Indo-European Languages. In Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Enthusiasm and support: alternative sentiment classification for social movements on social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Phelps",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Picco",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Diesner",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM conference on Web science -WebSci '14",
"volume": "",
"issue": "",
"pages": "261--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S., Agarwal, S., Guo, J., Phelps, K., Picco, J., and Diesner, J. (2014). Enthusiasm and support: alternative sentiment classification for social movements on social media. In Proceedings of the 2014 ACM conference on Web science -WebSci '14, pages 261-262, Bloomington, Indiana, USA, jun. ACM Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentiment Analysis with Incremental Human-in-the-Loop Learning and Lexical Resource Customization",
"authors": [],
"year": null,
"venue": "Proceedings of the 26th ACM Conference on Hypertext & Social Media -HT '15",
"volume": "",
"issue": "",
"pages": "323--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sentiment Analysis with Incremental Human-in-the- Loop Learning and Lexical Resource Customization. In Proceedings of the 26th ACM Conference on Hypertext & Social Media -HT '15, pages 323-325, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-dataset-multi-task Neural Sequence Tagging for Information Extraction from Tweets",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 30th ACM Conference on Hypertext and Social Media -HT '19",
"volume": "",
"issue": "",
"pages": "283--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, S. (2019). Multi-dataset-multi-task Neural Se- quence Tagging for Information Extraction from Tweets. In Proceedings of the 30th ACM Conference on Hyper- text and Social Media -HT '19, pages 283-284, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [
"Kr."
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "M",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, S. M. and Zampieri, M. (2020). Evaluating aggression identification in social media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2020), Paris, France, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "B",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidgen, B., Harris, A., Nguyen, D., Tromble, R., Hale, S., and Margetts, H. (2019). Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Huggingface's transformers: Stateof-the-art natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. (2019). Huggingface's transformers: State- of-the-art natural language processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Marginalization of labels (M): While using the previous method, in HASOC we just took the respective probability of the combined label and made our decision on the basis of that probability. A limitation of this approach is that it does not guarentee consistency",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "No label: This represents the simple fine-tuning approach. \u2022 (C): Joint label training \u2022 (M): Marginalization of labels \u2022 (ALL): Joint training of different languages. \u2022 (ALL) (C): Joint training of different languages with joint label training. \u2022 (ALL) (M): Joint training of different languages with joint label training and marginalization of labels.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Lang</td><td>task A</td><td/><td>task B</td></tr><tr><td/><td>train dev</td><td>test</td><td>train dev</td><td>test</td></tr><tr><td>ENG</td><td colspan=\"4\">4263 1066 1200 4263 1066 1200</td></tr><tr><td>HIN</td><td>3984 997</td><td colspan=\"2\">1200 3984 997</td><td>1200</td></tr><tr><td colspan=\"2\">IBEN 3826 957</td><td colspan=\"2\">1188 3826 957</td><td>1188</td></tr></table>",
"text": ".",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td>in relative ranks of labels for that subtasks when</td></tr><tr><td>combined with labels from other subtasks, i.e.</td></tr><tr><td>p(NAG-GEN) &gt; p(CAG-GEN) does not guarentee</td></tr><tr><td>that p(NAG-NGEN) &gt; p(CAG-</td></tr></table>",
"text": "Results of sub-task A for each model and each language.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>lang</td><td colspan=\"2\">task model</td><td colspan=\"2\">weighted-F1</td><td>rank</td><td/><td>Overall</td></tr><tr><td/><td/><td/><td>dev</td><td colspan=\"3\">test dev test</td><td>Rank</td></tr><tr><td colspan=\"2\">ENG A</td><td>bert-base-multilingual-uncased (ALL)</td><td colspan=\"2\">0.798 0.728</td><td>1</td><td>3</td><td>-</td></tr><tr><td/><td/><td>bert-base-uncased (C)</td><td colspan=\"2\">0.795 0.759</td><td>2</td><td>2</td><td>-</td></tr><tr><td/><td/><td>bert-base-uncased (M)</td><td colspan=\"2\">0.795 0.759</td><td>3</td><td>1</td><td>2</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.802</td><td>-</td><td>-</td><td>1*</td></tr><tr><td>HIN</td><td>A</td><td>bert-base-multilingual-uncased</td><td colspan=\"2\">0.708 0.778</td><td>1</td><td>3</td><td>-</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (C) 0.696 0.779</td><td>2</td><td>1</td><td>3</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (M) 0.695 0.778</td><td>3</td><td>2</td><td>-</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.812</td><td>-</td><td>-</td><td>1*</td></tr><tr><td colspan=\"2\">IBEN A</td><td>bert-base-multilingual-uncased (ALL)</td><td colspan=\"2\">0.737 0.780</td><td>1</td><td>1</td><td>3</td></tr><tr><td/><td/><td>xlm-roberta-base (M)</td><td colspan=\"2\">0.732 0.772</td><td>2</td><td>2</td><td>-</td></tr><tr><td/><td/><td>xlm-roberta-base (C)</td><td colspan=\"2\">0.731 0.772</td><td>3</td><td>3</td><td>-</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.821</td><td>-</td><td>-</td><td>1*</td></tr><tr><td colspan=\"2\">ENG B</td><td>bert-base-uncased (M)</td><td colspan=\"2\">0.978 0.857</td><td>1</td><td>1</td><td>4</td></tr><tr><td/><td/><td>xlm-roberta-base (ALL)</td><td colspan=\"2\">0.968 0.844</td><td>2</td><td>2</td><td>-</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (M) 
0.983 0.843</td><td>3</td><td>3</td><td>-</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.871</td><td>-</td><td>-</td><td>1*</td></tr><tr><td>HIN</td><td>B</td><td>bert-base-multilingual-uncased</td><td colspan=\"2\">0.986 0.837</td><td>1</td><td>3</td><td>-</td></tr><tr><td/><td/><td>bert-base-multilingual-uncased (ALL)</td><td colspan=\"2\">0.994 0.849</td><td>2</td><td>1</td><td>3</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (C) 0.962 0.843</td><td>3</td><td>2</td><td>-</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.878</td><td>-</td><td>-</td><td>1*</td></tr><tr><td colspan=\"2\">IBEN B</td><td>bert-base-multilingual-uncased (ALL)</td><td colspan=\"2\">0.992 0.927</td><td>1</td><td>1</td><td>3</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (M) 0.965 0.926</td><td>2</td><td>2</td><td>-</td></tr><tr><td/><td/><td colspan=\"3\">bert-base-multilingual-uncased (ALL) (C) 0.902 0.925</td><td>3</td><td>3</td><td>-</td></tr><tr><td/><td/><td>Overall Best Model****</td><td colspan=\"2\">-0.938</td><td>-</td><td>-</td><td>1*</td></tr></table>",
"text": "respectively. The table fol-",
"type_str": "table",
"html": null,
"num": null
}
}
}
}