{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:38.174906Z"
},
"title": "M-BAD: A Multilabel Dataset for Detecting Aggressive Texts and Their Targets",
"authors": [
{
"first": "Omar",
"middle": [],
"last": "Sharif",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Telecommunication Engineering, Chittagong University of Engineering & Technology",
"location": {
"postCode": "Chattogram-4349",
"country": "Bangladesh"
}
},
"email": "[email protected]"
},
{
"first": "Eftekhar",
"middle": [],
"last": "Hossain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Telecommunication Engineering, Chittagong University of Engineering & Technology",
"location": {
"postCode": "Chattogram-4349",
"country": "Bangladesh"
}
},
"email": "[email protected]"
},
{
"first": "Mohammed",
"middle": [
"Moshiul"
],
"last": "Hoque",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Telecommunication Engineering, Chittagong University of Engineering & Technology",
"location": {
"postCode": "Chattogram-4349",
"country": "Bangladesh"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently, the detection and categorization of undesired (e.g., aggressive, abusive, offensive, hateful) content on online platforms have grabbed the attention of researchers because of their detrimental impact on society. Several attempts have been made to mitigate the usage and propagation of such content. However, most past studies were conducted primarily for English, while low-resource languages like Bengali remained out of focus. Therefore, to facilitate research in this arena, this paper introduces a novel multilabel Bengali dataset (named M-BAD) containing 15650 texts for detecting aggressive texts and their targets. Each text of M-BAD went through rigorous two-level annotation. At the primary level, each text is labelled as either aggressive or non-aggressive. At the secondary level, the aggressive texts are further annotated with five fine-grained target classes: religion, politics, verbal, gender and race. Baseline experiments are carried out with different machine learning (ML), deep learning (DL) and transformer models, where Bangla-BERT acquired the highest weighted f1-score in both the detection (0.92) and target identification (0.83) tasks. Error analysis of the models exhibits the difficulty of identifying context-dependent aggression, and this work argues that further research is required to address these issues.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently, the detection and categorization of undesired (e.g., aggressive, abusive, offensive, hateful) content on online platforms have grabbed the attention of researchers because of their detrimental impact on society. Several attempts have been made to mitigate the usage and propagation of such content. However, most past studies were conducted primarily for English, while low-resource languages like Bengali remained out of focus. Therefore, to facilitate research in this arena, this paper introduces a novel multilabel Bengali dataset (named M-BAD) containing 15650 texts for detecting aggressive texts and their targets. Each text of M-BAD went through rigorous two-level annotation. At the primary level, each text is labelled as either aggressive or non-aggressive. At the secondary level, the aggressive texts are further annotated with five fine-grained target classes: religion, politics, verbal, gender and race. Baseline experiments are carried out with different machine learning (ML), deep learning (DL) and transformer models, where Bangla-BERT acquired the highest weighted f1-score in both the detection (0.92) and target identification (0.83) tasks. Error analysis of the models exhibits the difficulty of identifying context-dependent aggression, and this work argues that further research is required to address these issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media platforms have become a powerful tool for spontaneously connecting people and sharing information with effortless access to the internet. These platforms provide users with a cloak of anonymity that allows them to voice their opinions publicly. Unfortunately, this power of anonymity is misused to disseminate aggressive, abusive, hateful and illegal content. In the recent past, these mediums have been used to incite religious, political and communal violence (Hartung et al., 2017). A significant portion of such incidents has been communicated through textual content (Kumar et al., 2020a; Feldman et al., 2021). Therefore, it has become crucial to develop automated systems to restrain the proliferation of such undesired or aggressive texts. This issue has been taken seriously for English, German, and other high-resource languages (Caselli et al., 2021; Aksenov et al., 2021). However, minimal research effort has been made for low-resource languages, including Bengali. Systems developed for English or other languages cannot detect detrimental texts written in Bengali due to the significant variations in language constructs and morphological features. Nevertheless, people use their regional language to communicate over social media. Therefore, developing benchmark datasets and regional language tools is paramount to tackling the undesired text detection challenge. This work develops M-BAD, containing 15650 texts, using a two-level hierarchical annotation schema. In level-1, texts are categorized into binary classes: aggressive or non-aggressive. In level-2, 8289 aggressive texts are further annotated with multilabel targets. These labels identify the target of aggression among five fine-grained classes: religion, gender, race, verbal and politics (the detailed taxonomy is discussed in Section 3). Proper annotation guidelines and detailed statistics of the dataset are described to ensure M-BAD's quality.
Several experiments are performed using ML, DL and transformer models to assess the task. The experiments demonstrate that (i) transformer models are more effective at detecting aggressive texts and their targets than their ML/DL counterparts, and (ii) covert propagation of aggression using ambiguous, context-dependent and sarcastic words is difficult to identify. The significant contributions of this work can be summarized as follows,",
"cite_spans": [
{
"start": 468,
"end": 490,
"text": "(Hartung et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 580,
"end": 601,
"text": "(Kumar et al., 2020a;",
"ref_id": null
},
{
"start": 602,
"end": 623,
"text": "Feldman et al., 2021)",
"ref_id": null
},
{
"start": 847,
"end": 869,
"text": "(Caselli et al., 2021;",
"ref_id": "BIBREF3"
},
{
"start": 870,
"end": 891,
"text": "Aksenov et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Study two new problems from the perspective of a low-resource language (i.e., Bengali): (i) detecting aggressive texts and (ii) identifying the multilabel targets of aggression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Release a new benchmark aggressive text dataset labelled with the targets of aggression, along with detailed annotation steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Perform baseline experimentation on the developed dataset (M-BAD) to benchmark the two problems, providing the first insight into this challenging task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Reproducibility: The resources to reproduce the results are available at https://github.com/omar-sharif03/M-BAD. The appendix contains details about data sources, annotators and a few samples of M-BAD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section briefly describes the past studies related to aggression and other undesired content detection concerning non-Bengali and Bengali languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Non-Bengali aggressive text classification: Kumar et al. (2018a) compiled a dataset of 15000 aggression-annotated comments in English and Hindi with three classes: overtly aggressive, covertly aggressive and non-aggressive. In their subsequent work (Kumar et al., 2020b), Bengali aggressive comments were added to the corpus. Early works with neural network techniques such as LSTM (Nikhil et al., 2018), CNN (Kumari and Singh, 2020) and a combination of shallow and deep networks (Golem et al., 2018) achieved good accuracy. However, with the arrival of BERT-based models, these acquired superior performance and outperformed all previous models on these datasets (Risch and Krestel, 2020; Gordeev and Lykova, 2020). Bhardwaj et al. (2020) developed a multilabel dataset in Hindi with five hostile classes: fake, defamation, offensive, hate and non-hostile. Their baseline system was implemented with m-BERT embeddings and an SVM. Leite et al. (2020) introduced a multilabel toxic language dataset containing 21k tweets manually annotated into seven categories: insult, LGBTQ+phobia, obscene, misogyny, racism, non-toxic and xenophobia. They also performed a baseline evaluation with variations of BERT models. In a similar work, Moon et al. (2020) developed a corpus to detect toxic speech in Korean online news comments.",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Kumar et al., 2020b)",
"ref_id": "BIBREF18"
},
{
"start": 380,
"end": 401,
"text": "(Nikhil et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 408,
"end": 432,
"text": "(Kumari and Singh, 2020)",
"ref_id": "BIBREF20"
},
{
"start": 475,
"end": 495,
"text": "(Golem et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 651,
"end": 676,
"text": "(Risch and Krestel, 2020;",
"ref_id": "BIBREF26"
},
{
"start": 677,
"end": 702,
"text": "Gordeev and Lykova, 2020;",
"ref_id": "BIBREF10"
},
{
"start": 705,
"end": 727,
"text": "Bhardwaj et al. (2020)",
"ref_id": null
},
{
"start": 1222,
"end": 1240,
"text": "Moon et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bengali aggressive text classification: No significant research has yet been conducted on detecting multilabel aggression in Bengali; the scarcity of benchmark corpora is the primary reason. A few works have developed datasets and models in other correlated domains such as hate, abuse, fake news and offence. Karim et al. (2021) developed a hate speech dataset of 3000 samples with four categories: political, personal, religious and geopolitical. Emon et al. (2019) presented a dataset comprising 4.7k abusive Bengali texts collected from online platforms and proposed an LSTM-based classifier to categorize texts into seven classes. However, they did not investigate other DL models, which might reach similar accuracy at a lower computational cost. To detect threatening and abusive language, a dataset of 5.6k Bengali comments was created by Chakraborty and Seddiqui (2019). In recent work, Sharif and Hoque (2021a) introduced a benchmark Bengali aggressive text dataset. They employed a hierarchical annotation schema to divide the dataset into two coarse-grained (aggressive, non-aggressive) and four fine-grained (political, religious, verbal, gendered) aggression classes. In their later work (Sharif and Hoque, 2021b), they extended the dataset from 7.5k to 14k texts.",
"cite_spans": [
{
"start": 871,
"end": 902,
"text": "Chakraborty and Seddiqui (2019)",
"ref_id": "BIBREF4"
},
{
"start": 1226,
"end": 1251,
"text": "(Sharif and Hoque, 2021b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Differences with existing studies: To the best of our knowledge, very few works have attempted to detect aggressive texts and identify the target of aggression (e.g. religion, gender, race). Existing works (Sharif and Hoque, 2021b; Zampieri et al., 2019; Kumar et al., 2018b) have framed it as a multi-class classification problem and ignored the overlapping phenomena of classes. However, a text can express aggression towards multiple targets simultaneously: for instance, an aggressive write-up against women politicians expresses both political and gendered aggression. The proposed work addresses these previously overlooked issues and differs from existing research in the following ways: (i) it develops a novel Bengali aggressive text dataset annotated with the multiple targets of each aggressive text (to the best of our knowledge, the first attempt to develop such a dataset in Bengali), (ii) it illustrates detailed annotation guidelines that can be followed to develop resources for similar domains in Bengali and other low-resource languages, and (iii) it performs experimentation on the multilabel classes with various ML, DL and transformer-based models.",
"cite_spans": [
{
"start": 212,
"end": 237,
"text": "(Sharif and Hoque, 2021b;",
"ref_id": null
},
{
"start": 238,
"end": 260,
"text": "Zampieri et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 261,
"end": 281,
"text": "Kumar et al., 2018b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This work presents a two-level hierarchical annotation schema to develop a novel multilabel aggression dataset in Bengali (M-BAD). Level-1 has two coarse-grained categories: aggressive and non-aggressive. In contrast, level-2 has five fine-grained multilabel target classes (religion, politics, verbal, gender, race). This work differs from the previous work by Sharif and Hoque (2021b) in two ways: (i) overlapping phenomena between aggression targets are considered, and (ii) a new target class (i.e., racial aggression) is added to M-BAD. Figure 1 illustrates the taxonomic structure of M-BAD. Because of the subjective nature of the dataset, it is crucial to have a clear understanding of the categories; this helps develop a quality dataset by mitigating annotation biases and reducing ambiguities. After analyzing past studies (Sharif and Hoque, 2021b; Bhardwaj et al., 2020; Zampieri et al., 2019) on textual aggression and other related phenomena, we differentiate between the coarse-grained and fine-grained categories as follows.",
"cite_spans": [
{
"start": 274,
"end": 316,
"text": "(religion, politics, verbal, gender, race)",
"ref_id": null
},
{
"start": 364,
"end": 388,
"text": "Sharif and Hoque (2021b)",
"ref_id": null
},
{
"start": 835,
"end": 860,
"text": "(Sharif and Hoque, 2021b;",
"ref_id": null
},
{
"start": 861,
"end": 883,
"text": "Bhardwaj et al., 2020;",
"ref_id": null
},
{
"start": 884,
"end": 906,
"text": "Zampieri et al., 2019;",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 546,
"end": 554,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "Coarse-grained Aggression Classes: The system initially classifies an input text into one of two classes: aggressive (AG) or non-aggressive (NoAG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 AG: texts that excite, attack or seek to harm an individual, group or community based on criteria such as gender identity, political ideology, sexual orientation, religious belief, race, ethnicity or nationality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 NoAG: texts that contain no aggressive statements and express no intention to harm others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "Fine-grained Target Classes: An AG text is further classified into five fine-grained categories: religious aggression (ReAG), political aggression (PoAG), verbal aggression (VeAG), gendered aggression (GeAG) and racial aggression (RaAG). Each class is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 ReAG: texts that excite violence by attacking the religion, religious organizations or religious beliefs (e.g., Catholicism, Hinduism, Judaism, Islam) of a community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 PoAG: texts that demean a political ideology, provoke the followers of political parties, or incite people against law enforcement agencies and the state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 VeAG: texts that seek to harm others or denounce their social status using curse words, obscenities, and other outrageous or threatening language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 GeAG: texts that attack an individual or group by making aggressive references to sexual orientation, sexuality, body parts, or other lewd content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "\u2022 RaAG: texts that insult or attack someone and promote aggression based on race.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Development Taxonomy",
"sec_num": "3"
},
{
"text": "To the best of our knowledge, no dataset is available to date for detecting or classifying multilabel aggressive texts and their targets in Bengali. However, the availability of a benchmark dataset is a prerequisite for developing any deep learning-based intelligent text classification system. This drawback motivated us to construct M-BAD, a novel multilabel Bengali aggressive text dataset. This work follows the guidelines and directions given by (Sharif and Hoque, 2021b; Vidgen and Derczynski, 2021) to ensure the quality of the dataset. This section briefly describes the data collection and annotation steps, with detailed statistics of M-BAD.",
"cite_spans": [
{
"start": 450,
"end": 475,
"text": "(Sharif and Hoque, 2021b;",
"ref_id": null
},
{
"start": 476,
"end": 504,
"text": "Vidgen and Derczynski, 2021)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-BAD: Multilabel Aggression Dataset",
"sec_num": "4"
},
{
"text": "We manually accumulated 16000 aggressive and non-aggressive texts from different social platforms between 16 June and 27 December 2021. During this period, we only collected texts that were posted, composed or shared after 1 January 2020. Potential texts were accumulated from YouTube channels and Facebook pages affiliated with political organizations, religion, newsgroups, artists, authors, celebrities, etc. Appendix A presents detailed statistics of the data collection sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
{
"text": "Aggressive texts were collected from comments and posts that express aggression or excite violence. We also scanned the profiles of users who promoted, shared, or glorified aggressive information to acquire additional texts. On the other hand, non-aggressive texts were collected from news, comments and posts related to sports, education, entertainment, science and technology. Furthermore, while collecting aggressive texts, many samples were found that did not express any aggression; such texts were also added to the corpus. We did not store any personal information (name, phone number, birth date, location) of the users during data accumulation, and each sample is anonymized in the dataset. Thus, we do not know who posted or created the collected texts. Finally, a few preprocessing filters were applied to remove inappropriate texts: 255 samples were discarded because they (i) contained non-Bengali text, (ii) were shorter than three words, or (iii) were duplicates. The remaining 15745 texts were passed to the annotators for manual labelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
{
"text": "Section 3 describes the annotation schema and class definitions used to annotate the texts. Six annotators carried out the annotation: four undergraduate and two graduate students. An expert verified the label in case of disagreement. Appendix B illustrates the detailed demographics of the annotators. The annotators were split into three groups (two in each), and each group labelled a different subset of the processed texts. To achieve quality annotations, we trained the annotators on the class definitions and associated examples. We tried to ensure that annotators understood what an aggressive text is and how to determine the target of aggression. Moreover, annotators were carefully guided in weekly lab meetings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.2"
},
{
"text": "Two annotators annotated each text, and the final label was assigned based on the agreement between the annotators. In case of disagreement, an expert resolved the issue through deliberations with the annotators. During the final label assignment, we found 95 texts that did not fall into any defined aggression category and subsequently discarded them. Finally, we obtained M-BAD, an aggression dataset annotated with targets, containing 15650 texts. Appendix C shows a few samples of M-BAD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.2"
},
{
"text": "We measure the inter-annotator agreement using the kappa score (Cohen, 1960) to check the validity of the annotations.",
"cite_spans": [
{
"start": 61,
"end": 74,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.2"
},
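The agreement check described above can be sketched as follows. This is an illustrative pure-Python implementation of Cohen's kappa, not the authors' code; the annotator labels below are hypothetical stand-ins for the binary AG/NoAG annotations.

```python
# Cohen's kappa between two annotators' labels (illustrative sketch).
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two equal-length label lists."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                   # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical annotations from two annotators in one group.
a1 = ["AG", "AG", "NoAG", "AG", "NoAG", "NoAG"]
a2 = ["AG", "NoAG", "NoAG", "AG", "NoAG", "NoAG"]
print(round(cohen_kappa(a1, a2), 3))  # 0.667
```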
{
"text": "For training and evaluation purposes, M-BAD is divided into train (80%), test (10%) and validation (10%) splits using a stratified strategy. The identical split ratio is used for both the coarse-grained and the multilabel fine-grained experiments. Table 2 presents the class-wise distribution of the texts for both Level-1 and Level-2. The Level-2 distribution is slightly imbalanced, which will be challenging to handle in a multilabel setup. To obtain deeper insights, the training set is further analyzed, as reported in Table 3 . In Level-1, the NoAG class has more words (\u2248106k) and unique words (\u224824k) than the AG class. Meanwhile, in Level-2, VeAG has the maximum number of words (\u224842k) and unique words (\u224811k), while the RaAG class has the fewest (\u22481.7k and \u22481.2k). However, the average number of words per text ranges from 10 to 12 across the aggression categories. Figure 2 shows the histogram of text lengths for each category. It is observed that \u22485000 texts of the NoAG class have lengths between \u224815 and 40. In the VeAG class, most text lengths fall between 5 and 30, while \u22481000 texts of the RaAG class have lengths below 20. Only a small number of texts are longer than 50 words. For quantitative analysis, we calculated the Jaccard similarity scores between the 400 most frequent words of each class. Table 4 presents the similarity values for each pair of categories from Level-1 and Level-2. The VeAG-GeAG pair obtained the highest similarity score (0.50), while the PoAG-RaAG pair got the lowest (0.16). The VeAG class has high similarity with almost all classes except RaAG.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 567,
"end": 574,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 961,
"end": 969,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1418,
"end": 1425,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Dataset Statistics",
"sec_num": "4.3"
},
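The quantitative analysis above can be sketched as follows: take the k most frequent words of each class (the paper uses k = 400) and compute the Jaccard similarity of the two word sets. This is an illustrative reimplementation, not the authors' code, and the toy corpora are hypothetical English stand-ins for the Bengali classes.

```python
# Jaccard similarity between the top-k frequent words of two classes.
from collections import Counter

def top_k_words(texts, k):
    """Set of the k most frequent whitespace tokens across the texts."""
    counts = Counter(w for t in texts for w in t.split())
    return {w for w, _ in counts.most_common(k)}

def jaccard(texts_a, texts_b, k=400):
    a, b = top_k_words(texts_a, k), top_k_words(texts_b, k)
    return len(a & b) / len(a | b)

# Hypothetical stand-ins for two fine-grained classes.
veag = ["you are a fool", "what a fool you are"]
geag = ["you are shameless", "shameless people like you"]
print(round(jaccard(veag, geag), 2))  # 0.25
```

A high score for a pair such as VeAG-GeAG indicates heavy vocabulary overlap, which foreshadows the confusions reported in the error analysis.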
{
"text": "Several computational models are investigated to develop the target-aware aggression identification system. At first, models are developed for classifying aggressive texts; then, models are built to categorize the targets of aggression (ReAG, PoAG, VeAG, GeAG, RaAG) in the multilabel scenario. Machine learning and deep learning-based methods are employed to build the system. This section briefly discusses the techniques and methods used to develop the system.",
"cite_spans": [
{
"start": 262,
"end": 292,
"text": "(ReAG, PoAG, VeAG, GeAG, RaAG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5"
},
{
"text": "Two ML-based methods, Logistic Regression (LR) (Sharif and Hoque, 2019) and Naive Bayes with Support Vector Machine (NBSVM) (Wang and Manning, 2012), have been investigated for the classification task. Bag-of-words (BoW) features are used to train these models. The LR model is built with the 'lbfgs' optimizer and 'l2' regularization; the inverse regularization parameter C is set to 1.0. For NBSVM, the additive smoothing (\u03b1) and regularization (C) parameters are both set to 1.0, whereas the interpolation value is set to \u03b2 = 0.25.",
"cite_spans": [
{
"start": 47,
"end": 71,
"text": "(Sharif and Hoque, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ML-based methods",
"sec_num": "5.1"
},
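The LR baseline above can be sketched with scikit-learn. This is a minimal illustration under the stated hyperparameters (lbfgs solver, l2 penalty, C = 1.0, bag-of-words features), not the authors' released code; the toy texts and labels are hypothetical English stand-ins for the Bengali data.

```python
# Bag-of-words + logistic regression baseline (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I will hurt you", "you deserve harm", "nice game today", "great science news"]
labels = ["AG", "AG", "NoAG", "NoAG"]

clf = make_pipeline(
    CountVectorizer(),                                        # BoW features
    LogisticRegression(solver="lbfgs", penalty="l2", C=1.0),  # paper's stated settings
)
clf.fit(texts, labels)
print(clf.predict(["what a nice game"]))
```

The NBSVM variant would replace the classifier with an SVM over Naive Bayes log-count-ratio features, interpolated with \u03b2 = 0.25.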
{
"text": "Several popular DL methods are also investigated, including BiGRU (Marpaung et al., 2021) and pretrained transformers (Vaswani et al., 2017), to identify multilabel textual aggression.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Marpaung et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 117,
"end": 139,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DL-based Methods",
"sec_num": "5.2"
},
{
"text": "BiGRU+FastText: FastText (Joulin et al., 2016) embeddings are used as the input to the BiGRU model. A 1D spatial dropout is first applied over the embedding features, which are then fed to a BiGRU layer with 80 hidden units. The sequence of hidden states from the BiGRU is passed to a 1D global average-pooling and a 1D global max-pooling layer. Subsequently, the outputs of the two pooling layers are concatenated and propagated to the classification layer.",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DL-based Methods",
"sec_num": "5.2"
},
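The pooling head of the BiGRU model can be illustrated with a small numpy sketch: per-timestep hidden states are reduced by global average- and max-pooling over the time axis, and the two pooled vectors are concatenated before the classification layer. The random tensor below is a hypothetical stand-in for real BiGRU outputs (a BiGRU with 80 units emits 160 features per timestep).

```python
# Global average + max pooling over a BiGRU output sequence (illustrative).
import numpy as np

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(4, 25, 160))  # (batch, timesteps, 2 * 80 units)

avg_pool = hidden_states.mean(axis=1)          # (4, 160), pooled over time
max_pool = hidden_states.max(axis=1)           # (4, 160), pooled over time
features = np.concatenate([avg_pool, max_pool], axis=1)
print(features.shape)                          # (4, 320) -> classification layer
```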
{
"text": "Pretrained Transformers: In recent years, transformer (Vaswani et al., 2017) models trained in multilingual and monolingual settings have achieved outstanding results on undesired text classification tasks (Sharif and Hoque, 2021b). As our task deals with a dataset in a low-resource language, we employed three transformer-based models: (i) Multilingual Bidirectional Encoder Representations from Transformers (m-BERT) (Devlin et al., 2018), (ii) BERT for the Bangla language (Bangla-BERT) (Bhattacharjee et al., 2021), and (iii) BERT for Indian languages (Indic-BERT) (Kakwani et al., 2020). The models were obtained from the Hugging Face transformers library and fine-tuned with default arguments on the developed dataset.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF33"
},
{
"start": 217,
"end": 242,
"text": "(Sharif and Hoque, 2021b;",
"ref_id": null
},
{
"start": 429,
"end": 449,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF6"
},
{
"start": 575,
"end": 597,
"text": "(Kakwani et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DL-based Methods",
"sec_num": "5.2"
},
{
"text": "Both the ML and DL-based models are trained for two classification tasks: coarse-grained and multilabel fine-grained. To allow reproducibility and mitigate training complexity, we use identical hyperparameter values for both tasks. We employed the Ktrain (Maiya, 2020) wrapper, which provides easy training and implementation of the models; for multilabel classification, we enabled Ktrain's default multilabel settings. The BiGRU+FastText model is trained with a learning rate of 7e\u22123, while the transformer models use 8e\u22125. The models are trained using the triangular learning rate policy (Smith, 2017) for 20 epochs with a batch size of 32. To save the best intermediate models, we utilized an early stopping criterion.",
"cite_spans": [
{
"start": 621,
"end": 634,
"text": "(Smith, 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DL-based Methods",
"sec_num": "5.2"
},
{
"text": "The experiments were carried out on the Google Colaboratory platform in a GPU environment. Evaluation is performed based on the weighted f1-score. Due to the highly skewed distribution of the classes, we considered the macro f1-score (MF1) as our primary metric in the multilabel evaluation. Besides, individual class performance is measured through the precision (P), recall (R) and f1-score (F1) metrics. Table 5 presents the outcome of the different models on the test set for the coarse-grained classification. In terms of weighted f1-score (WF1), both LR and NBSVM obtained an identical score of 0.91, while the BiGRU+FastText and m-BERT models got a slightly lower score (0.90). However, the Bangla-BERT model achieved the highest F1 across the two coarse-grained classes (AG/NoAG = 0.92) and thus outperformed all the models with the highest WF1 score of 0.92. Table 6 illustrates that NBSVM obtained the lowest MF1 (0.61) and WF1 (0.77) scores. Both the Indic-BERT and BiGRU+FastText models acquired an identical WF1 of 0.79. Meanwhile, the macro and weighted f1-scores improved slightly (MF1 \u22484%, WF1 \u22481%) with the m-BERT model. However, the Bangla-BERT model exceeds all the models by achieving the highest MF1 (0.72) and WF1 (0.83).",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 434,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
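The weighted f1-score used above can be made concrete with a small pure-Python sketch: per-class precision, recall and F1 are computed, then combined with class support as weights. This is illustrative only; the toy labels below are hypothetical.

```python
# Support-weighted F1 over per-class scores (illustrative sketch).
from collections import Counter

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        pred = sum(p == cls for p in y_pred)
        prec = tp / pred if pred else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += n * f1                      # weight each class by its support
    return total / len(y_true)

y_true = ["AG", "AG", "AG", "NoAG", "NoAG"]
y_pred = ["AG", "AG", "NoAG", "NoAG", "NoAG"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.8
```

The macro f1-score (MF1) differs only in averaging the per-class F1 values with equal weight, which is why it is the stricter metric under skewed class distributions.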
{
"text": "In terms of class-wise performance, Bangla-BERT obtained the highest f1-score in four fine-grained aggression classes: ReAG (0.94), PoAG (0.92), VeAG (0.81) and GeAG (0.68). One interesting finding is that in the RaAG class, some models (LR, NBSVM, Indic-BERT) did not identify a single instance correctly. Moreover, the models' performance degrades for the classes (GeAG, RaAG) that have fewer training samples than the others. Thus, a larger dataset with a balanced data distribution needs to be developed for classifying the problematic multilabel samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "The results confirm that Bangla-BERT is the best-performing model in both the coarse-grained and fine-grained classification tasks (Table 5, 6). We perform a thorough error analysis to understand the model's mistakes across different classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.1.1"
},
{
"text": "Quantitative analysis: Figure 3 shows the confusion matrices for the Bangla-BERT model. Figure 3 (a) depicts that in the coarse-grained classification, the model incorrectly identified 73 (out of 807) and 56 (out of 758) instances as NoAG and AG texts, respectively. The confusion matrices for the fine-grained classes are shown in Figure 3 (b). Qualitative Analysis: Figure 4 shows some correctly classified and misclassified sample texts from the fine-grained classification task. The output predictions are obtained from the Bangla-BERT model. The first two samples are correctly classified into different fine-grained aggression classes. However, in the third example, the model was able to identify the text only as ReAG and incorrectly predicted it as VeAG. Similarly, for the last example, the model failed to classify it as RaAG at all. These examples illustrate the underlying difficulties of the multilabel classification problem. From the analysis, we found that some texts express aggression implicitly, which makes it arduous for the model to determine multiple classes simultaneously. Moreover, some words appear extensively across the fine-grained classes; perhaps these words confuse the model when distinguishing the classes, making the task more difficult. Adding more training samples across all classes might alleviate the problem to some extent.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 88,
"end": 97,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 324,
"end": 336,
"text": "Figure 3 (b)",
"ref_id": "FIGREF2"
},
{
"start": 359,
"end": 367,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.1.1"
},
{
"text": "This paper presented a multilabel aggression identification system for Bengali. To accomplish the purpose, this work introduced M-BAD, a multilabel benchmark dataset consisting of 15650 texts. A two-level hierarchical annotation schema has been followed to develop the corpus. Among the levels, Level-1 is concerned with either aggressive or not aggressive, whereas Level-2 is concerned with the targets (religious, political, verbal, gender, racial) of the aggressive texts in a multilabel scenario. Several traditional and state of the art computational models have been investigated for benchmark evaluation. The results exhibit that the Bangla-BERT model obtained the highest weighted f 1 -score of 0.83 for the multilabel classification. The error analysis revealed that it is challenging to identify the multiple targets of aggressive text as words are frequently overlapped across different classes. In future, we aim to mitigate this issue by exploring multitask learning and domain adaption approaches. Moreover, future work considers including more data samples with a significant period to minimize the bias towards a limited set of events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.facebook.com/help/1020633957973118, https://www.youtube.com/static?template=terms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work supported by the ICT Innovation Fund, ICT Division, Ministry of Posts, Telecommunications and Information Technology, Bangladesh.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Data samples were collected from public post/comment threads of Facebook and YouTube. We did not store the profile information of any users. The data collection procedure is consistent with the copyright and terms of service of these organizations 2 . Potential texts were culled from more than 200 Bengali YouTube channels and Facebook pages. The popularity and activity status of a few data sources are presented in table A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Sources",
"sec_num": null
},
{
"text": "Past studies (Suhr et al., 2021; Zhou et al., 2021) on benchmark dataset creation have emphasized knowing about the demographic, geographic, research and other related information of the annotators. Since aggression is a very subjective phenomenon, annotators perspective and experience play a crucial role in developing the dataset. Six students and an expert were involved in our dataset construction process. Annotators demographic information, research experience, the field of research, and personal experience of viewing online aggression are summarized in table B.1.Some key characteristics of the annotators' pool are, (i) native Bengali speakers, (ii) have prior experience of annotation, (iii) not an active member of any political parties, (iv) not hold extreme view against religion, (v) viewed online aggression. Before requiting, the annotators' necessary ethical approval was taken, and they are substantially paid according to university regulations.",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "(Suhr et al., 2021;",
"ref_id": "BIBREF32"
},
{
"start": 33,
"end": 51,
"text": "Zhou et al., 2021)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Annotator Demographics",
"sec_num": null
},
{
"text": "The authors would like to state that the examples referred to in the figure C.1 presented as they were accumulated from the source. Authors do not use these examples to hurt individuals or promote aggressive language usage. The goal of this work is to mitigate the propagation of such language. Table B .1: Summary of annotators information. ",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 302,
"text": "Table B",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Data Samples",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained classification of political bias in German news: A data set and initial experiments",
"authors": [
{
"first": "Dmitrii",
"middle": [],
"last": "Aksenov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Karolina",
"middle": [],
"last": "Zaczynska",
"suffix": ""
},
{
"first": "Malte",
"middle": [],
"last": "Ostendorff",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Moreno-Schneider",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "121--131",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.13"
]
},
"num": null,
"urls": [],
"raw_text": "Dmitrii Aksenov, Peter Bourgonje, Karolina Zaczyn- ska, Malte Ostendorff, Julian Moreno-Schneider, and Georg Rehm. 2021. Fine-grained classification of political bias in German news: A data set and initial experiments. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 121-131, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Amitava Das, and Tanmoy Chakraborty. 2020. Hostility detection dataset in hindi",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bhardwaj",
"suffix": ""
},
{
"first": "Shad",
"middle": [],
"last": "Md",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ekbal",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bhardwaj, Md Shad Akhtar, Asif Ekbal, Ami- tava Das, and Tanmoy Chakraborty. 2020. Hostility detection dataset in hindi.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Banglabert: Combating embedding barrier in multilingual models for low-resource language understanding",
"authors": [
{
"first": "Abhik",
"middle": [],
"last": "Bhattacharjee",
"suffix": ""
},
{
"first": "Tahmid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Kazi",
"middle": [],
"last": "Samin",
"suffix": ""
},
{
"first": "Md",
"middle": [],
"last": "Saiful Islam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Anindya",
"middle": [],
"last": "Iqbal",
"suffix": ""
},
{
"first": "Rifat",
"middle": [],
"last": "Shahriyar",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhik Bhattacharjee, Tahmid Hasan, Kazi Samin, Md Saiful Islam, M. Sohel Rahman, Anindya Iqbal, and Rifat Shahriyar. 2021. Banglabert: Com- bating embedding barrier in multilingual models for low-resource language understanding. CoRR, abs/2101.00204.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "HateBERT: Retraining BERT for abusive language detection in English",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Mitrovi\u0107",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Granitzer",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "17--25",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.3"
]
},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, and Michael Granitzer. 2021. HateBERT: Retraining BERT for abusive language detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17-25, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Threat and abusive language detection on social media in bengali language",
"authors": [
{
"first": "Puja",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Md. Hanif",
"middle": [],
"last": "Seddiqui",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.1109/ICASERT.2019.8934609"
]
},
"num": null,
"urls": [],
"raw_text": "Puja Chakraborty and Md. Hanif Seddiqui. 2019. Threat and abusive language detection on social media in bengali language. In 2019 1st International Con- ference on Advances in Science, Engineering and Robotics Technology (ICASERT), pages 1-6.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {
"DOI": [
"10.1177/001316446002000104"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Mea- surement, 20(1):37-46.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A deep learning approach to detect abusive bengali text",
"authors": [
{
"first": "Shihab",
"middle": [],
"last": "Estiak Ahmed Emon",
"suffix": ""
},
{
"first": "Joti",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Banarjee",
"suffix": ""
},
{
"first": "Tanni",
"middle": [],
"last": "Kumar Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mittra",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 7th International Conference on Smart Computing Communications (ICSCC)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {
"DOI": [
"10.1109/ICSCC.2019.8843606"
]
},
"num": null,
"urls": [],
"raw_text": "Estiak Ahmed Emon, Shihab Rahman, Joti Banarjee, Amit Kumar Das, and Tanni Mittra. 2019. A deep learning approach to detect abusive bengali text. In 2019 7th International Conference on Smart Comput- ing Communications (ICSCC), pages 1-5.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "2021. Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Martino",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Leberknight",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Feldman, Giovanni Da San Martino, Chris Leberknight, and Preslav Nakov, editors. 2021. Pro- ceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propa- ganda. Association for Computational Linguistics, Online.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining shallow and deep learning for aggressive text detection",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Golem",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "188--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Golem, Mladen Karan, and Jan \u0160najder. 2018. Combining shallow and deep learning for aggressive text detection. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC- 2018), pages 188-198, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT of all trades, master of some",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Gordeev",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Lykova",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Gordeev and Olga Lykova. 2020. BERT of all trades, master of some. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 93-98, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ranking rightwing extremist social media profiles by similarity to democratic and extremist groups",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Schmidtke",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5204"
]
},
"num": null,
"urls": [],
"raw_text": "Matthias Hartung, Roman Klinger, Franziska Schmidtke, and Lars Vogel. 2017. Ranking right- wing extremist social media profiles by similarity to democratic and extremist groups. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 24-33, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "NLP-CUET@DravidianLangTech-EACL2021: Investigating visual and textual features to identify trolls from multimodal social media memes",
"authors": [
{
"first": "Eftekhar",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Sharif",
"suffix": ""
},
{
"first": "Mohammed Moshiul",
"middle": [],
"last": "Hoque",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eftekhar Hossain, Omar Sharif, and Mo- hammed Moshiul Hoque. 2021. NLP- CUET@DravidianLangTech-EACL2021: In- vestigating visual and textual features to identify trolls from multimodal social media memes. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 300-306, Kyiv. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fasttext. zip: Compressing text classification models",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "H\u00e9rve",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03651"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages",
"authors": [
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Golla",
"suffix": ""
},
{
"first": "N",
"middle": [
"C"
],
"last": "Gokul",
"suffix": ""
},
{
"first": "Avik",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Pratyush",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In Findings of EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bharathi Raja Chakravarthi, Md. Azam Hossain, and Stefan Decker. 2021. Deephateexplainer: Explainable hate speech detection in under-resourced bengali language",
"authors": [
{
"first": "",
"middle": [],
"last": "Md",
"suffix": ""
},
{
"first": "Sumon",
"middle": [],
"last": "Karim",
"suffix": ""
},
{
"first": "Tanhim",
"middle": [],
"last": "Kanti Dey",
"suffix": ""
},
{
"first": "Sagor",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Kabir",
"middle": [],
"last": "Mehadi Hasan Menon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hossain",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md. Rezaul Karim, Sumon Kanti Dey, Tanhim Islam, Sagor Sarker, Mehadi Hasan Menon, Kabir Hossain, Bharathi Raja Chakravarthi, Md. Azam Hossain, and Stefan Decker. 2021. Deephateexplainer: Explain- able hate speech detection in under-resourced bengali language.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "2020a. Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying. European Language Resources Association (ELRA)",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Bornini",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Malmasi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Bornini Lahiri, Marcos Zampieri, Shervin Malmasi, Vanessa Murdock, and Daniel Kadar, editors. 2020a. Proceedings of the Second Workshop on Trolling, Aggression and Cyber- bullying. European Language Resources Association (ELRA), Marseille, France.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018a. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying (TRAC-2018), pages 1-11, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2020b. Evaluating aggression identification in social media. In Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying, pages 1-5, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Aggressionannotated corpus of Hindi-English code-mixed data",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [
"N"
],
"last": "Reganti",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Aishwarya N. Reganti, Akshit Bha- tia, and Tushar Maheshwari. 2018b. Aggression- annotated corpus of Hindi-English code-mixed data. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "AI_ML_NIT_Patna @ TRAC -2: Deep learning approach for multi-lingual aggression identification",
"authors": [
{
"first": "Kirti",
"middle": [],
"last": "Kumari",
"suffix": ""
},
{
"first": "Jyoti",
"middle": [
"Prakash"
],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "113--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kirti Kumari and Jyoti Prakash Singh. 2020. AI_ML_NIT_Patna @ TRAC -2: Deep learning approach for multi-lingual aggression identification. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 113-119, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Toxic language detection in social media for Brazilian Portuguese: New dataset and multilingual analysis",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Jo\u00e3o Augusto Leite",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "914--924",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Augusto Leite, Diego Silva, Kalina Bontcheva, and Carolina Scarton. 2020. Toxic language detec- tion in social media for Brazilian Portuguese: New dataset and multilingual analysis. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Lan- guage Processing, pages 914-924, Suzhou, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "2020. ktrain: A low-code library for augmented machine learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maiya",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10703"
]
},
"num": null,
"urls": [],
"raw_text": "Arun S. Maiya. 2020. ktrain: A low-code library for augmented machine learning. arXiv preprint arXiv:2004.10703.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Hate speech detection in indonesian twitter texts using bidirectional gated recurrent unit",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Marpaung",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Rismala",
"suffix": ""
},
{
"first": "Hani",
"middle": [],
"last": "Nurrahmi",
"suffix": ""
}
],
"year": 2021,
"venue": "2021 13th International Conference on Knowledge and Smart Technology (KST)",
"volume": "",
"issue": "",
"pages": "186--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Marpaung, Rita Rismala, and Hani Nurrahmi. 2021. Hate speech detection in indonesian twitter texts using bidirectional gated recurrent unit. In 2021 13th International Conference on Knowledge and Smart Technology (KST), pages 186-190. IEEE.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BEEP! Korean corpus of online news comments for toxic speech detection",
"authors": [
{
"first": "Jihyung",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Junbum",
"middle": [],
"last": "Won Ik Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "25--31",
"other_ids": {
"DOI": [
"10.18653/v1/2020.socialnlp-1.4"
]
},
"num": null,
"urls": [],
"raw_text": "Jihyung Moon, Won Ik Cho, and Junbum Lee. 2020. BEEP! Korean corpus of online news comments for toxic speech detection. In Proceedings of the Eighth International Workshop on Natural Language Pro- cessing for Social Media, pages 25-31, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "LSTMs with attention for aggression detection",
"authors": [
{
"first": "Nishant",
"middle": [],
"last": "Nikhil",
"suffix": ""
},
{
"first": "Ramit",
"middle": [],
"last": "Pahwa",
"suffix": ""
},
{
"first": "Mehul",
"middle": [],
"last": "Kumar Nirala",
"suffix": ""
},
{
"first": "Rohan",
"middle": [],
"last": "Khilnani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nishant Nikhil, Ramit Pahwa, Mehul Kumar Nirala, and Rohan Khilnani. 2018. LSTMs with attention for aggression detection. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 52-57, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bagging BERT models for robust aggression identification",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "55--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Risch and Ralf Krestel. 2020. Bagging BERT models for robust aggression identification. In Pro- ceedings of the Second Workshop on Trolling, Ag- gression and Cyberbullying, pages 55-61, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatic detection of suspicious bangla text using logistic regression",
"authors": [],
"year": 2019,
"venue": "International Conference on Intelligent Computing & Optimization",
"volume": "",
"issue": "",
"pages": "581--590",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Sharif and Mohammed Moshiul Hoque. 2019. Automatic detection of suspicious bangla text using logistic regression. In International Conference on Intelligent Computing & Optimization, pages 581- 590. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Identification and classification of textual aggression in social media: Resource creation and evaluation",
"authors": [],
"year": null,
"venue": "Combating Online Hostile Posts in Regional Languages during Emergency Situation",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Sharif and Mohammed Moshiul Hoque. 2021a. Identification and classification of textual aggres- sion in social media: Resource creation and evalua- tion. In Combating Online Hostile Posts in Regional Languages during Emergency Situation, pages 1-12. Springer Nature Switzerland AG.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Tackling cyber-aggression: Identification and finegrained categorization of aggressive texts on social media using weighted ensemble of transformers",
"authors": [],
"year": null,
"venue": "Neurocomputing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.neucom.2021.12.022"
]
},
"num": null,
"urls": [],
"raw_text": "Omar Sharif and Mohammed Moshiul Hoque. 2021b. Tackling cyber-aggression: Identification and fine- grained categorization of aggressive texts on so- cial media using weighted ensemble of transformers. Neurocomputing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "NLP-CUET@DravidianLangTech-EACL2021: Offensive language detection from multilingual code-mixed text using transformers",
"authors": [
{
"first": "Omar",
"middle": [],
"last": "Sharif",
"suffix": ""
},
{
"first": "Eftekhar",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "Mohammed Moshiul",
"middle": [],
"last": "Hoque",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "255--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Sharif, Eftekhar Hossain, and Mo- hammed Moshiul Hoque. 2021. NLP- CUET@DravidianLangTech-EACL2021: Offensive language detection from multilingual code-mixed text using transformers. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 255-261, Kyiv. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cyclical learning rates for training neural networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Leslie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE winter conference on applications of computer vision (WACV)",
"volume": "",
"issue": "",
"pages": "464--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV), pages 464-472. IEEE.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Crowdsourcing beyond annotation: Case studies in benchmark data collection",
"authors": [
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-tutorials.1"
]
},
"num": null,
"urls": [],
"raw_text": "Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel R. Bowman, and Yoav Artzi. 2021. Crowdsourcing beyond annotation: Case studies in benchmark data collection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 1-6, Punta Cana, Dominican Republic & Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2021,
"venue": "PLOS ONE",
"volume": "15",
"issue": "12",
"pages": "1--32",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0243300"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2021. Directions in abusive language training data, a systematic review: Garbage in, garbage out. PLOS ONE, 15(12):1-32.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Introducing CAD: the contextual abuse dataset",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Rossini",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2289--2303",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.182"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Dong Nguyen, Helen Margetts, Patricia Rossini, and Rebekah Tromble. 2021. Introducing CAD: the contextual abuse dataset. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2289-2303, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Baselines and bigrams: Simple, good sentiment and topic classification",
"authors": [
{
"first": "Sida",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida I Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 90-94.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Challenges in automated debiasing for toxic language detection",
"authors": [
{
"first": "Xuhui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3143--3155",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.274"
]
},
"num": null,
"urls": [],
"raw_text": "Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah Smith. 2021. Challenges in automated debiasing for toxic language detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3143-3155, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Taxonomic structure",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Histogram of the text length for each category",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Confusion matrices of each category for the Bangla-BERT model. The model classifies many other-class instances (103 out of 232) as VeAG; the appearance of outrageous words in other fine-grained aggressive classes may be the reason for this confusion.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Some correctly and incorrectly classified samples by the Bangla-BERT model",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "Kappa (\u03ba) score on each annotation level",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>presents the \u03ba-score</td></tr></table>"
},
"TABREF4": {
"html": null,
"text": "Number of instances in train, test and validation sets for each category",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Level</td><td>Class</td><td>#Words</td><td>#Unique words</td><td>Avg. #words/text</td></tr><tr><td>Level-1</td><td>AG</td><td>80553</td><td>17413</td><td>12.12</td></tr><tr><td>Level-1</td><td>NoAG</td><td>106573</td><td>24617</td><td>18.08</td></tr><tr><td>Level-2</td><td>ReAG</td><td>30748</td><td>9093</td><td>12.85</td></tr><tr><td>Level-2</td><td>PoAG</td><td>28410</td><td>8496</td><td>11.79</td></tr><tr><td>Level-2</td><td>VeAG</td><td>42342</td><td>11587</td><td>10.74</td></tr><tr><td>Level-2</td><td>GeAG</td><td>13817</td><td>4796</td><td>10.57</td></tr><tr><td>Level-2</td><td>RaAG</td><td>1711</td><td>1206</td><td>9.77</td></tr></table>"
},
"TABREF5": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF7": {
"html": null,
"text": "Jaccard similarity of 400 most frequent words between each pair of classes",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF8": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>reports the evaluation results of the multilabel fine-grained classification. The outcome il-</td></tr><tr><td>1 https://huggingface.co/</td></tr></table>"
},
"TABREF9": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Performance of the coarse-grained classification on the test set. Here, BG+FT represents the BiGRU+FastText model</td></tr></table>"
},
"TABREF11": {
"html": null,
"text": "Fine-grained classification performance on the test set. Here, MF1 indicates the macro f 1 -score",
"type_str": "table",
"num": null,
"content": "<table><tr><td>(a) NoAG</td><td>(b) ReAG</td><td>(c) PoAG</td></tr><tr><td>(d) VeAG</td><td>(e) GeAG</td><td>(f) RaAG</td></tr></table>"
},
"TABREF12": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">presents the false-negative rate (FNR) of the fine-grained categories. We noticed that the FNR is very high for the GeAG (0.34) class, while the ReAG (0.065) and PoAG (0.08) classes have a low FNR.</td></tr><tr><td>Class</td><td>False negative rate</td></tr><tr><td>ReAG</td><td>20/305 (0.065)</td></tr><tr><td>PoAG</td><td>23/275 (0.08)</td></tr><tr><td>VeAG</td><td>83/472 (0.17)</td></tr><tr><td>GeAG</td><td>57/167 (0.34)</td></tr><tr><td>RaAG</td><td>4/28 (0.14)</td></tr></table>"
},
"TABREF13": {
"html": null,
"text": "Error analysis for each fine-grained category",
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}