|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:06:07.104044Z" |
|
}, |
|
"title": "ur-iw-hnt at GermEval 2021: An Ensembling Strategy with Multiple BERT Models", |
|
"authors": [ |
|
{

"first": "Hoai",

"middle": [

"Nam"

],

"last": "Tran",

"suffix": "",

"affiliation": {

"laboratory": "",

"institution": "Information Science, University of Regensburg",

"location": {

"settlement": "Regensburg",

"country": "Germany"

}

},

"email": ""

},
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Regensburg", |
|
"location": { |
|
"settlement": "Regensburg", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our approach (ur-iw-hnt) for the Shared Task of GermEval2021 to identify toxic, engaging, and fact-claiming comments. We submitted three runs using an ensembling strategy by majority (hard) voting with multiple different BERT models of three different types: German-based, Twitter-based, and multilingual models. All ensemble models outperform single models, while BERTweet is the winner of all individual models in every subtask. Twitter-based models perform better than GermanBERT models, and multilingual models perform worse but by a small margin.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our approach (ur-iw-hnt) for the Shared Task of GermEval2021 to identify toxic, engaging, and fact-claiming comments. We submitted three runs using an ensembling strategy by majority (hard) voting with multiple different BERT models of three different types: German-based, Twitter-based, and multilingual models. All ensemble models outperform single models, while BERTweet is the winner of all individual models in every subtask. Twitter-based models perform better than GermanBERT models, and multilingual models perform worse but by a small margin.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Moderation of popular social media networks is a difficult task. Facebook alone has almost 2.8 billion active users on April 2021 (Kemp, 2021) . Moderating discussions between users simultaneously all day is an impossible task, so moderators need help with this work. Also, fully automated solutions for content moderation are not possible, and human input is still required (Cambridge Consultants, 2019 ). An AI-based helper solution for harmful content detection is needed to make social networking less toxic and more pleasant instead.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 142, |
|
"text": "(Kemp, 2021)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 403, |
|
"text": "Consultants, 2019", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Shared Task of GermEval2021 focuses on highly relevant topics for moderators and community managers to moderate online discussion platforms (Risch et al., 2021) . The challenge is not to specialize in one broad NLP task like harmful content detection but to detect other essential categories like which comments are engaging or factclaiming.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We participated in all three subtasks (toxic, engaging and fact-claiming comment classification) to test our ensemble model to see whether multiple BERT-based models provide robust performance for different tasks without further customization. Moderators would benefit from a working system without having to change models or settings all the time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This report discusses in detail the three runs we submitted in the GermEval2021 Shared Task (Risch et al., 2021) . We start with a brief reflection on related work, only focussing on aspects that are closely aligned with the subtasks. We then explain the dataset and the shared tasks in more detail. Next, we present our experiments, some discussions of the results, and we finally draw some conclusions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 112, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To encourage reproducibility of experimental work, we make all code available via GitHub 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Detecting harmful content in social media platforms is not only a monolingual but a multilingual issue. A multilingual toxic text detection classifier uses a fusion strategy employing mBERT and XLM-RoBERTa on imbalanced sample distributions (Song et al., 2021) . Deep learning ensembles also show their effectiveness in hate speech detection (Zimmerman et al., 2019) . A taxonomy of engaging comments contains different possible classifications (Risch and Krestel, 2020) . With the increasing spread of misinformation, more collaborations with IT companies specialized in factchecking and more intelligence and monitoring tools are available to help to identify harmful content (Arnold, 2020). An attempt to fully automate fact-checking is the tool called ClaimBuster (Hassan et al., 2015) . Another tool named CrowdTangle monitors social media platforms and alerts the user if specific keywords are triggered so manual factclaim checking can be done (Arnold, 2020). In addition, an annotation schema for claim detection is also available (Konstantinovskiy et al., 2021) . (Risch et al., 2021) . Since the labels are imbalanced, we first applied a stratified split onto the dataset so that 80% is for training. We then again apply a stratified split on what is left into two halves, the first part is the development set, and the second part is the holdout set for evaluation which we call the evaluation set here. After training, the ensemble strategy predicts the test dataset, consisting of 944 comments. Table 1 shows the imbalance in favor of the negative label. The organizers of GermEval2021 chose the metric Krippendorff's alpha to check each task's intercoder reliability (Risch et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 260, |
|
"text": "(Song et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 366, |
|
"text": "(Zimmerman et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 470, |
|
"text": "(Risch and Krestel, 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 789, |
|
"text": "(Hassan et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1070, |
|
"text": "(Konstantinovskiy et al., 2021)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1073, |
|
"end": 1093, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1681, |
|
"end": 1701, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1508, |
|
"end": 1515, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
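A minimal sketch of the stratified 80/10/10 split described above, assuming the comments live in a pandas DataFrame with a binary label column (the column name and seed are our assumptions, not taken from the paper's code):

```python
# Hedged sketch of the stratified 80/10/10 split described above.
# `df` and its "label" column are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_dataset(df: pd.DataFrame, seed: int = 42):
    # 80% training, stratified on the (imbalanced) binary label.
    train_df, rest_df = train_test_split(
        df, train_size=0.8, stratify=df["label"], random_state=seed
    )
    # Split the remaining 20% into equal development and evaluation halves.
    dev_df, eval_df = train_test_split(
        rest_df, train_size=0.5, stratify=rest_df["label"], random_state=seed
    )
    return train_df, dev_df, eval_df
```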
|
{ |
|
"text": "Toxic comments include many harmful and dangerous offenses like \"hate speech, insults, threats, vulgar advertisements and misconceptions about political and religious tendencies\" (Song et al., 2021) . Such behavior only leads to users leaving the discussion or manual bans by the moderator, which can be overwhelming depending on the number of active toxic users (Risch and Krestel, 2020) . For this subtask, the annotator agreement in the usage of insults, vulgar and sarcastic language is 0.73 < \u03b1 < 0.89, and in the discrimination, discredition, accusations of lying or threats of violence, the agreement is at 0.83 < \u03b1 < 0.90 (Risch et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 198, |
|
"text": "(Song et al., 2021)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 388, |
|
"text": "(Risch and Krestel, 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 650, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Toxic Comment Classification", |
|
"sec_num": "3.1" |
|
}, |
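Since Krippendorff's alpha is the task's agreement metric, here is a hedged sketch of computing it with the third-party `krippendorff` package; the organizers' exact tooling is not stated, and the ratings below are toy data:

```python
# Hedged sketch: Krippendorff's alpha for binary annotations with the
# `krippendorff` package (an assumption; not referenced by the paper).
# Rows are annotators, columns are comments; np.nan marks missing ratings.
import numpy as np
import krippendorff

ratings = np.array([
    [1, 0, 1, 1, np.nan],   # annotator 1
    [1, 0, 0, 1, 1],        # annotator 2
    [1, 0, 1, 1, 1],        # annotator 3
    [1, 1, 1, 1, 0],        # annotator 4
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"alpha = {alpha:.2f}")
```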
|
{ |
|
"text": "Engaging comments are, in general, attractive for users to participate in online discussions and get more interactions with other online users in the form of replies and upvotes. A taxonomy of engaging comments has been proposed to identify these comments for detection and classification, so moderators and community managers can reward these comments or posts (Risch and Krestel, 2020) . This task has three different categories (Risch et al., 2021) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 387, |
|
"text": "(Risch and Krestel, 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 451, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Engaging Comment Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Juristification, solution proposals, sharing of personal experiences (0.71 < \u03b1 < 0.89)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Engaging Comment Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Empathy with regard to other users' standpoints (0.79 < \u03b1 < 0.91)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Engaging Comment Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Polite interaction, mutual respect, mediation (0.85 < \u03b1 < 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Engaging Comment Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Detecting factual claims is part of the fact-checking process (Konstantinovskiy et al., 2021; Babakar and Moy, 2016; Nakov et al., 2021) . The challenge here is to identify claims that have not been factchecked before and go beyond one sentence that fits into this subtask (Babakar and Moy, 2016) . Annotator's agreement in fact assertion and evidence provision is at 0.73 < \u03b1 < 0.84 (Risch et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 93, |
|
"text": "(Konstantinovskiy et al., 2021;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 116, |
|
"text": "Babakar and Moy, 2016;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "Nakov et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 296, |
|
"text": "(Babakar and Moy, 2016)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 404, |
|
"text": "(Risch et al., 2021)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fact-Claiming Comment Classification", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For our system architecture (see Figure 1 ), we use three Python libraries/tools. Deep-Translator 2 translates all the German comments into English by choosing an external service, in our case, the free public Google Translate service. We use two different libraries for classification: Ernie 3 and Simple Transformers 4 . Both work on different versions of HuggingFace's Transformers (Wolf et al., 2020) and thus differently: Ernie is a beginner-friendly library last updated in 2020, based on Keras / Ten-sorFlow 2, and uses the optimizer Adam (Kingma and Ba, 2015). Simple Transformers is based on PyTorch and has more extensive options for hyperparameter tuning and training customizations with AdamW (Loshchilov and Hutter, 2019) as the default optimizer. The default hyperparameter values for our experiments, as recommended for BERT, are in Table 2 . The only pre-processing step is the tokenization by each BERT model using these libraries. Because of time constraints, crossvalidation has not been conducted. and evaluating the development and holdout set, the chosen models' predictions go to the ensemble strategy, which finally predicts the test dataset by majority (hard) voting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 404, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 734, |
|
"text": "(Loshchilov and Hutter, 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 41, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 855, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4.1" |
|
}, |
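A minimal sketch of this pipeline under stated assumptions: deep-translator's GoogleTranslator for the translation step and Simple Transformers for fine-tuning. The checkpoint, hyperparameters, and toy data are illustrative, not the paper's exact configuration:

```python
# Hedged sketch of the Figure 1 pipeline: translate German comments to
# English with deep-translator, then fine-tune with Simple Transformers.
import pandas as pd
from deep_translator import GoogleTranslator
from simpletransformers.classification import ClassificationModel

def translate_de_en(comments):
    # Free public Google Translate backend, as used in the paper.
    translator = GoogleTranslator(source="de", target="en")
    return [translator.translate(c) for c in comments]

# Toy training data; Simple Transformers expects "text" and "labels" columns.
train_df = pd.DataFrame({
    "text": translate_de_en(["Ein Beispielkommentar."]),
    "labels": [0],
})

model = ClassificationModel(
    "bert", "bert-base-uncased",                          # one of the ten classifiers
    args={"num_train_epochs": 3, "learning_rate": 4e-5},  # illustrative values
    use_cuda=False,                                       # set True if a GPU is available
)
model.train_model(train_df)
predictions, raw_outputs = model.predict(["an example comment"])
```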
|
{ |
|
"text": "BERT, which stands for Bidirectional Encoder Representations from Transformers, is a language model developed by Google and is known for its state-of-the-art (SOTA) performance in several NLP tasks (Devlin et al., 2019) . The Shared Task consists of German Facebook comments, so we see it fit to choose German-based and Englishtranslation-based models. Because Facebook comments have some similarity with Twitter comments, we also decide on Twitter-based models. There are several versions of BERT with different pre-training or fine-tuning:", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 219, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT and its variants", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 German-based BERT models -DBMDZ GermanBERT 5 -Deepset.AI GermanBERT (Chan et al., 2020) \u2022 Multilingual BERT models 5 https://huggingface.co/dbmdz/ bert-base-german-cased -mBERT Cased (Devlin et al., 2019) -XLM-RoBERTa (Conneau et al., 2019) \u2022 Twitter-based BERT models -BERTweet (Nguyen et al., 2020) -XLM-T (Barbieri et al., 2021) Table 3 shows the result of each BERT model on the evaluation/holdout set and on the test dataset with its labels for subtask 1 (which was provided after the submissions had been received).", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 89, |
|
"text": "(Chan et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 206, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 242, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 333, |
|
"text": "(Barbieri et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 341, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT and its variants", |
|
"sec_num": "4.2" |
|
}, |
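For reference, a hedged sketch mapping the ten classifiers of Table 3 to (model type, checkpoint) pairs as Simple Transformers expects them. Only the DBMDZ checkpoint is given verbatim in footnote 5; every other Hugging Face id here is our assumption based on the cited model names:

```python
# Hedged mapping of Table 3's classifiers to Simple Transformers
# (model_type, checkpoint) pairs. All checkpoint ids except the DBMDZ
# one (footnote 5) are assumptions, not taken from the paper.
CLASSIFIERS = {
    1:  ("bert", "bert-base-uncased"),                          # BERT base Uncased, English
    2:  ("bert", "bert-base-multilingual-cased"),               # mBERT, English input
    3:  ("bert", "bert-base-multilingual-cased"),               # mBERT, German input
    4:  ("bert", "dbmdz/bert-base-german-cased"),               # DBMDZ GermanBERT (footnote 5)
    5:  ("bert", "deepset/gbert-base"),                         # Deepset.AI GermanBERT (assumed)
    6:  ("bertweet", "vinai/bertweet-base"),                    # BERTweet, English
    7:  ("xlmroberta", "cardiffnlp/twitter-xlm-roberta-base"),  # XLM-T, English input
    8:  ("xlmroberta", "cardiffnlp/twitter-xlm-roberta-base"),  # XLM-T, German input
    9:  ("xlmroberta", "xlm-roberta-base"),                     # XLM-R base, English input
    10: ("xlmroberta", "xlm-roberta-base"),                     # XLM-R base, German input
}
```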
|
{ |
|
"text": "The Ensemble Technique is a combination of classifiers' predictions for further classification (Opitz and Maclin, 1999) . There are two popular types of ensembling: Bagging (Breiman, 1996) and Boosting (Freund and Schapire, 1999) . Ensembles have been shown to be highly effective for a variety of NLP tasks, e.g., in the current top 10 of SQuAD 2.0 6 , all models are ensembles. We went for simple majority ensembling using hard voting, which classifies with the largest sum of predictions from all models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 119, |
|
"text": "(Opitz and Maclin, 1999)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 188, |
|
"text": "(Breiman, 1996)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 229, |
|
"text": "(Freund and Schapire, 1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensembling Strategy", |
|
"sec_num": "4.3" |
|
}, |
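A minimal sketch of the majority (hard) voting described above, assuming binary 0/1 predictions stacked per model; with an odd number of voters no tie-break rule is needed:

```python
# Minimal sketch of majority (hard) voting: each classifier casts one
# binary vote per comment, and the label with the larger vote sum wins.
import numpy as np

def hard_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: shape (n_models, n_comments), values in {0, 1}."""
    votes = predictions.sum(axis=0)
    return (votes > predictions.shape[0] / 2).astype(int)

preds = np.array([
    [1, 0, 1],   # model A
    [1, 1, 0],   # model B
    [0, 0, 1],   # model C
])
print(hard_vote(preds))  # -> [1 0 1]
```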
|
{ |
|
"text": "We decided to use the three runs for the Shared Task to test different combinations of BERT models for a robust and consistent result in the test dataset. That is why we chose five models for the first run, seven models for the second run, and for the third run, nine models ensembled together. The first ensemble consists of two GermanBERT models, the English BERT base model, one Twitter-based Classifier Language macro F1 eval macro F1 test 1) BERT base Uncased (Devlin et al., 2019) English .6493 .6329 2) mBERT base Cased (Devlin et al., 2019) English .6247 .6194 3) mBERT base Cased (Devlin et al., 2019) German .6286 .6086 4) DBMDZ GermanBERT 5 German .6472 .6591 5) Deepset.AI GermanBERT (Chan et al., 2020) German .6481 .6608 6) BERTweet (Nguyen et al., 2020) English .6798 .6832 7) XLM-T (Barbieri et al., 2021) English .6553 .6681 8) XLM-T (Barbieri et al., 2021) German .6342 .6502 9) XLM-R base (Conneau et al., 2019) English .6421 .6482 10) XLM-R base (Conneau et al., 2019) German .3959 .3862 Tables 4, 5 , and 6, with precision, recall, and macro-averaged F1 score as the scoring metrics. The numbers in the column \"Ensemble\" refer to the classifier numbers from Table 3 ,2,3,4,5,6,8 .7124 .6642 .6875 3 1,2,3,4,5,6,7,8,9 .7003 .6542 .6764 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 486, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 548, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 610, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 715, |
|
"text": "(Chan et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 768, |
|
"text": "(Nguyen et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 821, |
|
"text": "(Barbieri et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 874, |
|
"text": "(Barbieri et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 908, |
|
"end": 930, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 966, |
|
"end": 988, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1187, |
|
"end": 1249, |
|
"text": ",2,3,4,5,6,8 .7124 .6642 .6875 3 1,2,3,4,5,6,7,8,9 .7003 .6542", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1008, |
|
"end": 1019, |
|
"text": "Tables 4, 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1186, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ensembling Strategy", |
|
"sec_num": "4.3" |
|
}, |
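The scores in Tables 4-6 can be reproduced in spirit with scikit-learn's macro-averaged metrics; a toy sketch (the organizers' official scoring script is not reproduced here, and the labels below are illustrative):

```python
# Sketch of the scoring used in Tables 4-6: precision, recall, and
# macro-averaged F1 via scikit-learn, on toy gold labels and votes.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 0]          # gold labels (toy data)
y_pred = [1, 0, 0, 1, 0, 1]          # ensemble votes (toy data)
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"P={p:.4f} R={r:.4f} macro-F1={f1:.4f}")
```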
|
{ |
|
"text": "Our experiments demonstrate that BERTweet was showing better performance than every other model in every subtask, which is a surprise. We expected the monolingual GermanBERT models to perform best because of the cultural context in the integrated German language. Multilingual BERT models perform worst but by a close margin. Because of an overfitting issue, the tenth BERT classifier XLM-R performed faultily, only recognizing negative labels and thus the low macro-averaged F1 scores. The margin of each ensemble performance in subtasks 1 and 3 is around 1%, and for subtask 2 only around 2%. We conclude that the ensembling strategy shows robustness and consistency for the choice of good classifiers in a big enough amount for each task, and it could be a legitimate approach for the overfitting problem. Because of time constraints, no cross-validation was conducted, and since the holdout set was chosen not to be released for training, there is still improvement in the training quality of the BERT models so that more experiments are needed. Each part of a system like the GPU influences the training accuracy, so an identical replication is difficult to achieve, leading to different results. That is why reproducibility is not guaranteed, even if a manual seed is set 7 . Also, the amount and the imbalance of the dataset can lead to overfitting and lower scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
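A hedged sketch of the seeding steps from the linked PyTorch reproducibility notes (footnote 7); as discussed above, even this does not guarantee identical results across different GPUs or library builds:

```python
# Hedged sketch of manual seeding per the PyTorch reproducibility notes.
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # all GPU RNGs
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```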
|
{ |
|
"text": "We presented an ensemble strategy using ten BERT classifiers, including the use of machine translation, demonstrating robustness across tasks. While ensembles perform best overall, Twitter-based models (using standard BERT hyperparameter values) with translation to English perform best in a single model setting. This observation might change if cross-validation, early stopping, hyperparameter tuning, and other optimization techniques for each model are available for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/HN-Tran/ GermEval2021", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments co-located with KONVENS", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/nidhaloff/ deep-translator 3 https://github.com/labteral/ernie 4 https://simpletransformers.ai/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://rajpurkar.github.io/ SQuAD-explorer/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://pytorch.org/docs/stable/ notes/randomness.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the project COURAGE: A Social Media Companion Safeguarding and Educating Students funded by the Volkswagen Foundation, grant number 95564.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The challenges of online fact checking", |
|
"authors": [ |
|
{

"first": "Phoebe",

"middle": [],

"last": "Arnold",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Full Fact", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phoebe Arnold. 2020. The challenges of online fact checking. Technical report, Full Fact, London, UK.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The State of Automated Factchecking", |
|
"authors": [ |
|
{ |
|
"first": "Mevan", |
|
"middle": [], |
|
"last": "Babakar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Moy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Full Fact", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mevan Babakar and Will Moy. 2016. The State of Au- tomated Factchecking. Technical report, Full Fact, London, UK.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "XLM-T: A multilingual language model toolkit for twitter", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Barbieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa Anke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Barbieri, Luis Espinosa Anke, and Jos\u00e9 Camacho-Collados. 2021. XLM-T: A multilin- gual language model toolkit for twitter. CoRR, abs/2104.12250.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bagging predictors", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Mach. Learn", |
|
"volume": "24", |
|
"issue": "2", |
|
"pages": "123--140", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman. 1996. Bagging predictors. Mach. Learn., 24(2):123-140.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Use of AI Online in online content moderation", |
|
"authors": [ |
|
{ |
|
"first": "Cambridge", |
|
"middle": [], |
|
"last": "Consultants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cambridge Consultants. 2019. Use of AI Online in on- line content moderation.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "German's next language model", |
|
"authors": [ |
|
{ |
|
"first": "Branden", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "M\u00f6ller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6788--6796", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Branden Chan, Stefan Schweter, and Timo M\u00f6ller. 2020. German's next language model. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6788-6796, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A short introduction to boosting", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Freund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Schapire", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Journal-Japanese Society For Artificial Intelligence", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Freund and Robert Schapire. 1999. A short intro- duction to boosting. Journal-Japanese Society For Artificial Intelligence, 14(771-780):1612.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The Quest to Automate Fact-Checking", |
|
"authors": [ |
|
{

"first": "Naeemul",

"middle": [],

"last": "Hassan",

"suffix": ""

},

{

"first": "Bill",

"middle": [],

"last": "Adair",

"suffix": ""

},

{

"first": "J",

"middle": [],

"last": "Hamilton",

"suffix": ""

},

{

"first": "C",

"middle": [],

"last": "Li",

"suffix": ""

},

{

"first": "M",

"middle": [],

"last": "Tremayne",

"suffix": ""

},

{

"first": "Jun",

"middle": [],

"last": "Yang",

"suffix": ""

},

{

"first": "Cong",

"middle": [],

"last": "Yu",

"suffix": ""

}
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 computation+ journalism symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naeemul Hassan, Bill Adair, J. Hamilton, C. Li, M. Tremayne, Jun Yang, and Cong Yu. 2015. The Quest to Automate Fact-Checking. In Proceedings of the 2015 computation+ journalism symposium.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Digital 2021 april statshot reportdatareportal -global digital insights", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Kemp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simon Kemp. 2021. Digital 2021 april statshot report - datareportal -global digital insights.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": [

"P"

],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [

"Lei"

],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations, ICLR 2015 -Conference Track Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. 3rd Inter- national Conference on Learning Representations, ICLR 2015 -Conference Track Proceedings, pages 1-15.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Konstantinovskiy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Price", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mevan", |
|
"middle": [], |
|
"last": "Babakar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arkaitz", |
|
"middle": [], |
|
"last": "Zubiaga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Digital Threats: Research and Practice", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Konstantinovskiy, Oliver Price, Mevan Babakar, and Arkaitz Zubiaga. 2021. Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection. Digital Threats: Research and Practice, 2(2).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Decoupled weight decay regularization. 7th International Conference on Learning Representations", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. 7th International Con- ference on Learning Representations, ICLR 2019.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Automated fact-checking for assisting human fact-checkers", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maram", |
|
"middle": [], |
|
"last": "Corney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Firoj", |
|
"middle": [], |
|
"last": "Hasanain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamer", |
|
"middle": [], |
|
"last": "Alam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Elsayed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Barr\u00f3n-Cede\u00f1o", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaden", |
|
"middle": [], |
|
"last": "Papotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni Da San", |
|
"middle": [], |
|
"last": "Shaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Martino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, David P. A. Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barr\u00f3n-Cede\u00f1o, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assist- ing human fact-checkers. CoRR, abs/2103.07769.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BERTweet: A pre-trained language model for English Tweets", |
|
"authors": [ |
|
{

"first": "Dat",

"middle": [

"Quoc"

],

"last": "Nguyen",

"suffix": ""

},

{

"first": "Thanh",

"middle": [],

"last": "Vu",

"suffix": ""

},

{

"first": "Anh",

"middle": [

"Tuan"

],

"last": "Nguyen",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -Demos", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 9-14. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Popular Ensemble Methods: An Empirical Study", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Maclin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "169--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Opitz and Richard Maclin. 1999. Popular En- semble Methods: An Empirical Study. Journal of Artificial Intelligence Research, 11:169-198.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Top comment or flop comment? Predicting and explaining user engagement in online news discussions", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Risch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Krestel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 14th International AAAI Conference on Web and Social Media", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "579--589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian Risch and Ralf Krestel. 2020. Top comment or flop comment? Predicting and explaining user en- gagement in online news discussions. Proceedings of the 14th International AAAI Conference on Web and Social Media, ICWSM 2020, pages 579-589.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Overview of the GermEval 2021 shared task on the identification of toxic, engaging, and fact-claiming comments", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Risch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anke", |
|
"middle": [], |
|
"last": "Stoll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lena", |
|
"middle": [], |
|
"last": "Wilms", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments colocated with KONVENS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian Risch, Anke Stoll, Lena Wilms, and Michael Wiegand. 2021. Overview of the GermEval 2021 shared task on the identification of toxic, engaging, and fact-claiming comments. In Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments co- located with KONVENS, pages 1-12.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A study of multilingual toxic text detection approaches under imbalanced sample distribution", |
|
"authors": [ |
|
{ |
|
"first": "Guizhe", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Degen", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Information (Switzerland)", |
|
"volume": "12", |
|
"issue": "5", |
|
"pages": "1--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guizhe Song, Degen Huang, and Zhifeng Xiao. 2021. A study of multilingual toxic text detection ap- proaches under imbalanced sample distribution. In- formation (Switzerland), 12(5):1-16.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{

"first": "Patrick",

"middle": [

"Von"

],

"last": "Platen",

"suffix": ""

},

{

"first": "Clara",

"middle": [],

"last": "Ma",

"suffix": ""

},

{

"first": "Yacine",

"middle": [],

"last": "Jernite",

"suffix": ""

},

{

"first": "Julien",

"middle": [],

"last": "Plu",

"suffix": ""

},

{

"first": "Canwen",

"middle": [],

"last": "Xu",

"suffix": ""

},

{

"first": "Teven",

"middle": [

"Le"

],

"last": "Scao",

"suffix": ""

},

{

"first": "Sylvain",

"middle": [],

"last": "Gugger",

"suffix": ""

},

{

"first": "Mariama",

"middle": [],

"last": "Drame",

"suffix": ""

},

{

"first": "Quentin",

"middle": [],

"last": "Lhoest",

"suffix": ""

},

{

"first": "Alexander",

"middle": [],

"last": "Rush",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improving hate speech detection with deep learning ensembles", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Zimmerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Fox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "LREC 2018 -11th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2546--2553", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Zimmerman, Chris Fox, and Udo Kruschwitz. 2019. Improving hate speech detection with deep learning ensembles. LREC 2018 -11th Interna- tional Conference on Language Resources and Eval- uation, pages 2546-2553.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>3 Dataset & Shared Task</td></tr><tr><td>The dataset for the Shared Task of GermEval2021</td></tr><tr><td>consists of 3,244 annotated user discussion com-</td></tr><tr><td>ments from a Facebook page of the German news</td></tr><tr><td>broadcast in the timeframe of February to July</td></tr><tr><td>2019, labeled by four annotators in three differ-</td></tr><tr><td>ent categories for binary classification: Toxic com-</td></tr><tr><td>ments, engaging comments and fact-claiming com-</td></tr><tr><td>ments</td></tr></table>", |
|
"text": "Provided training and test dataset" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: BERT classifier result for subtask 1</td></tr><tr><td>model (BERTweet), and one multilingual model,</td></tr><tr><td>so we have diversity for classification. For the</td></tr><tr><td>second ensemble, one multilingual model and one</td></tr><tr><td>Twitter-based model are added. The third ensemble</td></tr><tr><td>has every classifier except the last one.</td></tr><tr><td>The results for each subtask are in</td></tr></table>", |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Run Ensemble</td><td colspan=\"2\">P test R test macro F1 test</td></tr><tr><td>1</td><td>1,3,4,5,6</td><td>.7047 .6588</td><td>.6810</td></tr><tr><td>2</td><td>1,2,3,4,5,6,8</td><td>.7183 .6635</td><td>.6898</td></tr><tr><td>3</td><td colspan=\"2\">1,2,3,4,5,6,7,8,9 .7168 .6529</td><td>.6833</td></tr></table>", |
|
"text": "." |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Run Ensemble</td><td colspan=\"2\">P test R test macro F1 test</td></tr><tr><td>1</td><td>1,3,4,5,6</td><td>.7228 .6653</td><td>.6929</td></tr><tr><td>2</td><td>1</td><td/><td/></tr></table>", |
|
"text": "Ensemble result for subtask 1" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Run Ensemble</td><td colspan=\"2\">P test R test macro F1 test</td></tr><tr><td>1</td><td>1,3,4,5,6</td><td>.7791 .7310</td><td>.7543</td></tr><tr><td>2</td><td>1,2,3,4,5,6,8</td><td>.7756 .7454</td><td>.7602</td></tr><tr><td>3</td><td colspan=\"2\">1,2,3,4,5,6,7,8,9 .7725 .7438</td><td>7579</td></tr></table>", |
|
"text": "Ensemble result for subtask 2" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Ensemble result for subtask 3" |
|
} |
|
} |
|
} |
|
} |