{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:56.216705Z"
},
"title": "forumBERT: Topic Adaptation and Classification of Contextualized Forum Comments in German",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Yadav",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Benjamin",
"middle": [],
"last": "Milde",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group",
"institution": "Universit\u00e4t Hamburg",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Online user comments in public forums are often associated with low quality, hate speech or even excessive demands for moderation. To better exploit their constructive and deliberate potential, we present forumBERT. forum-BERT is built on top of the BERT architecture and uses a shared weight and late fusion technique to better determine the quality and relevance of a comment on a forum article. Our model integrates article context with comments for the online/offline comment moderation task. This is done using a two step procedure: self-supervised BERT language model fine tuning for topic adaptation followed by integration into the forumBERT architecture for online/offline classification. We present evaluation results on various classification tasks of the public One Million Post dataset, as well as on the online/offline comment moderation task on 998,158 labelled comments from NDR.de, a popular German broadcaster's website. fo-rumBERT significantly outperforms baseline models on the NDR dataset and also outperforms all existing advanced baseline models on the OMP dataset. Additionally we conduct two studies on the influence of topic adaptation on the general comment moderation task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Online user comments in public forums are often associated with low quality, hate speech or even excessive demands for moderation. To better exploit their constructive and deliberate potential, we present forumBERT. forum-BERT is built on top of the BERT architecture and uses a shared weight and late fusion technique to better determine the quality and relevance of a comment on a forum article. Our model integrates article context with comments for the online/offline comment moderation task. This is done using a two step procedure: self-supervised BERT language model fine tuning for topic adaptation followed by integration into the forumBERT architecture for online/offline classification. We present evaluation results on various classification tasks of the public One Million Post dataset, as well as on the online/offline comment moderation task on 998,158 labelled comments from NDR.de, a popular German broadcaster's website. fo-rumBERT significantly outperforms baseline models on the NDR dataset and also outperforms all existing advanced baseline models on the OMP dataset. Additionally we conduct two studies on the influence of topic adaptation on the general comment moderation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online user comments, such as those on journalistic content or product features are often associated with low quality, hate speech or even excessive demands for moderation. Automating this moderation or aspects of it can be considered to be of high practical interest. One of the key challenges of forum comment moderation is the specificity of category of classification. Forum comments have to be moderated for hate-speech, discrimination, spam among many other generally discussed classification tasks. Additionally comments on forum articles must also be moderated for relevance and contribution to the discourse. Schabus et al. (2017) and Schabus and Skowron (2018) introduces the idea of applied classification, wherein comments are annotated across multiple forum specific categories and classification models are created for each category. In this paper we focus on the more general \"comment moderation task\" on news forum comments. In this task, comments can be classified into one of two categories, either online or offline, where an online classification represents a comment that is accepted by the forum moderators and an offline classification represents comments that have been taken down by the forum moderators.",
"cite_spans": [
{
"start": 618,
"end": 639,
"text": "Schabus et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 644,
"end": 670,
"text": "Schabus and Skowron (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, the Natural Language Processing community has experienced a substantial shift towards using pre-trained models. Their usage on large corpora has proved to be beneficial in learning general language representations and has shown improvement in text classification and many other NLP tasks, which has also helped avoid training large language models from scratch. However, the lack of portability of NLP models to new conditions is a central issue in NLP. For many target applications like comment moderation on niche public forums, labelled data might be lacking and there might not be enough unlabelled data to train a general language model. These conditions press us to visit domain adaptation to improve the language model. Therefore, in this paper we present forumBERT, a modification to the BERT architecture which uses two weight shared BERT models and a late fusion technique to better determine a comment's quality and relevance on a forum article. We also extend the work by Rietzler et al. (2020) and investigate the influence of a domain adapted BERT language model on the downstream comment moderation accuracy as a function of labelled downstream training examples. In particular, the contributions of our paper are:",
"cite_spans": [
{
"start": 1001,
"end": 1023,
"text": "Rietzler et al. (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "\u2022 We present the forumBERT architecture to determine a comment's quality and relevance on a forum post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "\u2022 We introduce the NDR dataset which is used for the comment moderation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "\u2022 We show that forumBERT outperforms baseline models on the comment moderation task. forumBERT achieves state of the art results on seven classification tasks on the One Million Posts Dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "\u2022 We analyse the influence of topic adaptation on the forumBERT architecture by varying the number of labelled datapoints in the comment moderation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "\u2022 We also analyse the influence of the number of training steps of the BERT language model and the results on the downstream comment moderation classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "This paper has been structured in the following way: Section 2 introduces the BERT architecture and mentions existing comment moderation architectures and some relevant BERT model adaptations. Section 3 describes the NDR dataset and the NDR topic datasets. Section 4 introduces forum-BERT and the training procedure followed. Section 5 evaluates forumBERT and BERT on the NDR dataset and the OMP dataset. Section 6 contains our topic adaptation experiments on the effectiveness of topic adaptation and the influence of topic adaptation as a function of labelled training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work by",
"sec_num": null
},
{
"text": "Pre-trained models using large corpora have dominated the task of text classification. This began with pre-trained word embeddings such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) and now in the current paradigm, pre-trained models like ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019) , GPT/GPT2 (Radford et al., 2019) , XLNet , have achieved state of the art results in a wide spectrum of NLP taks including text classification.",
"cite_spans": [
{
"start": 148,
"end": 170,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 181,
"end": 206,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 269,
"end": 290,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 293,
"end": 319,
"text": "BERT (Devlin et al., 2019)",
"ref_id": null
},
{
"start": 331,
"end": 353,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "BERT (Devlin et al., 2019) is an amalgamation of several key findings in NLP research such as contextualized word representations, sub word tokenization (Wu et al., 2016) and transformers (Vaswani et al., 2017) . The main innovations are the unique learning methods adopted by BERT. The BERT language model is trained to optimize on two tasks, i.e Masked Language Modelling (MLM) and Next Sentence Prediction. Masked language modeling is a fill-in-the-blank task, where a model uses the context words surrounding a [MASK] token to try to predict what the [MASK] word should be. Next Sentence Prediction is a classification task, in which the BERT model receives a pair of sentences as input and learns to predict if the second sentence in the pair is the subsequent sentence in the original document.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 153,
"end": 170,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 188,
"end": 210,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 515,
"end": 521,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
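The following minimal sketch is not part of the paper or its released code; it merely illustrates the masked language modelling objective described above, using the Hugging Face transformers fill-mask pipeline and the German BERT model referenced later in Section 4.3. The example sentence is purely illustrative.

```python
# Illustrative only: masked language modelling with a pretrained German BERT.
# The example sentence and the use of the transformers pipeline are our choices,
# not the authors' code.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-german-cased")

# BERT predicts the [MASK] token from its bidirectional context.
for prediction in unmasker("Der Kommentar wurde vom Moderator [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```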
{
"text": "Pavlopoulos et al. 2017aintroduced an RNN based method for the comment moderation task on a Greek news sports portal. This method was improved by Pavlopoulos et al. (2017b) to include dataset specific user-embeddings, generated by accounting for the number of accepted and rejected comments of every user on the sports portal. Risch and Krestel (2018) have proposed a semi-automatic approach to comment moderation using a comment, user and article information to create a high recall logistic regression model. Large pre-trained BERT language models have been incorporated into many task specific architectures. Sentence-BERT (Reimers and Gurevych, 2019) is one such modification of the BERT network using Siamese and Triplet networks that is able to derive semantically meaningful sentence embeddings where semantically similar sentences are closer in the vector space. SentiBERT (Yin et al., 2020) is a BERT variant that effectively captures compositional sentiment semantics by incorporating BERT's contextualized representation with binary constituency parse tree to capture semantic composition.",
"cite_spans": [
{
"start": 146,
"end": 172,
"text": "Pavlopoulos et al. (2017b)",
"ref_id": "BIBREF7"
},
{
"start": 327,
"end": 351,
"text": "Risch and Krestel (2018)",
"ref_id": "BIBREF13"
},
{
"start": 626,
"end": 654,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comment Moderation Architectures and BERT Adaptations",
"sec_num": "2.1"
},
{
"text": "However, in the current paradigm, pre-trained language models are generalized and their portability to new conditions still remains an issue. To this end, work by Rietzler et al. (2020) and Xu et al. (2019) shows that in the aspect target sentiment task, the performance of models that are pre-trained on a general language corpus can be improved by fine tuning the language model on a domain specific corpus. We build on this and in Section 6 show that even in the comment moderation task on niche forums, the performance of models that are pre-trained on a German general language corpus can be improved by finetuning the language model on each specific forum topic.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Rietzler et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 190,
"end": 206,
"text": "Xu et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comment Moderation Architectures and BERT Adaptations",
"sec_num": "2.1"
},
{
"text": "To verify the topic adaptation capabilities in German news forum datasets, we procured the NDR dataset 1 which consists of almost one million labelled user comments and their adjoining articles from the NDR news website. This dataset can be obtained directly from NDR for academic and research use. To evaluate the performance of our forumBERT architecture on an already existing dataset, we use the One Million Posts Dataset (Schabus et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The NDR dataset consists of a collection of 998,158 labelled comments on 65,261 articles on the NDR website. All comments were collected between five and a half year span from 2014-05-09 to 2019-12-12. The dataset consists of the following attributes for every comment: On average the length of a comment on the NDR dataset is 59.15 words. The quartile comment lengths are shown in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "NDR Dataset",
"sec_num": "3.1"
},
{
"text": "News datasets are very general in nature where discussions range from politics, sports to technology and scientific news. Therefore, we used the URL attribute to segment the entire dataset into different topics for topic adaptation. Specifically, by splicing the URL attribute, topic information was obtained for each comment. For example, in \"http://relaunch.ndr.de/sport/handball/bundesliga/\" the url contains the topic of the article, which in this case is sport. This is used to segment the entire dataset into topics. The number of comments per topic are shown in Table 2 . We applied topic adaptation (Rietzler et al., 2020) to two topics, \"sport\" and \"kultur\" (Culture), as both had among the most labelled training datapoints, as shown in Table 2 ). \"Nachrichten\" (News) is too general to be considered a forum topic and thus was omitted.",
"cite_spans": [
{
"start": 607,
"end": 630,
"text": "(Rietzler et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 747,
"end": 754,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "3.1.1"
},
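As a concrete illustration of the URL-based topic segmentation described above, the following hedged sketch extracts the first path component as the topic. The dictionary keys and example URLs are hypothetical and do not reflect the NDR dataset's actual schema.

```python
# Hypothetical sketch of the URL-based topic segmentation; field names and URLs
# are illustrative only.
from collections import Counter
from urllib.parse import urlparse

def topic_from_url(url: str) -> str:
    # "http://relaunch.ndr.de/sport/handball/bundesliga/" -> "sport"
    parts = [p for p in urlparse(url).path.split("/") if p]
    return parts[0] if parts else "unknown"

comments = [
    {"url": "http://relaunch.ndr.de/sport/handball/bundesliga/"},
    {"url": "http://relaunch.ndr.de/kultur/musik/beispiel.html"},
]
print(Counter(topic_from_url(c["url"]) for c in comments))
```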
{
"text": "The One Million Posts dataset (OMP Schabus et al. (2017)) contains a selection of user comments posted to the Austrian Newspaper website \"Der Standard\". The comments have been selected from a 12 month time span between 2015-06-01 and 2016-05-31. There are 11,773 freely labelled posts on nine categories (not all labelled comments are labelled in every category) and 1,000,000 unlabelled posts in the data set. The amount of labelled data for each of the nine categories has been mentioned in Table 3 : number of labelled examples in each category in the OMP dataset (Schabus et al., 2017) 4 Methodology",
"cite_spans": [
{
"start": 567,
"end": 589,
"text": "(Schabus et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 493,
"end": 500,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "One Million Posts (OMP)",
"sec_num": "3.2"
},
{
"text": "This section presents forumBERT, which is an extension of BERT for contextual classification tasks like general comment moderation task. We use a German language pre-trained BERT language model as a basis and approach this task using a three-step procedure. In the first step we finetune the pre-trained weights of the language model in a self-supervised way on a topic-specific corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "One Million Posts (OMP)",
"sec_num": "3.2"
},
{
"text": "In the second step we incorporate this finetuned language model into the forumBERT architecture. The final step is the supervised training of forum-BERT for the online/offline classification end-task. A schema for this process is depicted in Figure 2 In the following subsections, we discuss how we finetune the BERT language model and then the forumBERT architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "One Million Posts (OMP)",
"sec_num": "3.2"
},
{
"text": "To create our forumBERT model, our first step deals with finetuning a pretrained BERT language model using a topic specfic corpora. As described in Section 3.1.1 we split the NDR dataset into multiple topics. We adopt post-training of BERT (Xu et al., are beyond word level. This is important since, at a high level we wish to generate similar embeddings for comments that are in the same context as it's adjoining article's context. Finetuning the language model helps mitigate the problem of having less labelled data, which is the case in many online forums. This finetuned language model is then incorporated into the forumBERT architecture.",
"cite_spans": [
{
"start": 240,
"end": 251,
"text": "(Xu et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT: Language Model Finetuning and Topic Adaptation",
"sec_num": "4.1"
},
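A minimal sketch of the topic-adaptation (language model finetuning) step, assuming the Hugging Face transformers Trainer API and masked language modelling only; the toy corpus is a placeholder and the hyperparameters mirror those reported in Section 4.3, but this is not the authors' training code.

```python
# Sketch only: MLM finetuning of bert-base-german-cased on a (placeholder) topic corpus.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-german-cased")

# Placeholder: in the paper this would be all comments/articles of one NDR topic.
topic_corpus = ["Tolles Spiel gestern in der Bundesliga!",
                "Die Ausstellung in Hamburg war sehenswert."]
train_dataset = [tokenizer(text, truncation=True, max_length=512) for text in topic_corpus]

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-german-topic", num_train_epochs=13,
                         per_device_train_batch_size=8, learning_rate=3e-5)
Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=collator).train()
model.save_pretrained("bert-german-topic")  # later plugged into forumBERT
```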
{
"text": "Other than using the topic adapted BERT language model to create the forumBERT model, we also investigate the limitations of language model finetuning for the comment moderation task through two tasks described in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT: Language Model Finetuning and Topic Adaptation",
"sec_num": "4.1"
},
{
"text": "forumBERT is an extension of BERT for topicknowledge learning and forum-comment classification. The model must be able to compare the article and the comment on the article to determine its quality and relevance on the forum. Inappropriate and discriminatory comments must be removed from the forum irrespective of the corresponding articles, but the model must also remove comments that are off-topic/irrelevant and digress too far from the topic of the article. To achieve this we use the forumBERT architecture. We adapt the finetuned BERT BASE model for forum comment classification by using two finetuned BERT BASE models, one which takes in as input the headline of the article and another which takes the comment on the corresponding article as input. To mitigate the problem of a parameter explosion be- cause of using 2 BERT models and to add implicit regularization we share the weight between the two BERT models (shown in Figure 3 ). We follow Devlin et al. (2019) and consider the final hidden state corresponding to the [CLS] input token for both the BERT models. The pair of article and comment representations thus obtained are both of dimensions 768 \u00d7 1. The pair of embeddings are concatenated at the output of the BERT model (late fusion). Late fusion is preferred rather than concatenating the input tokens and passing them through the network, to allow the network to fully separate out the differences between the article and the comment. The dimensions of the concatenated vector is 1536 \u00d7 1. The fused vector is then passed through 2 fully connected layers with weights W t \u2208 R 2n\u00d7n and W t \u2208 R n\u00d7k respectively, where n is the dimension of the comment/headline embedding (n = 768) and k is the number of labels (k = 2). A softmax function is applied to the final k length vector.",
"cite_spans": [
{
"start": 956,
"end": 976,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 934,
"end": 942,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "forumBERT: A Weight Shared BERT Model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "out = softmax(W t (W t (x)))",
"eq_num": "(1)"
}
],
"section": "forumBERT: A Weight Shared BERT Model",
"sec_num": "4.2"
},
{
"text": "Here, x represents the fused representation vector. We optimize the cross-entropy loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "forumBERT: A Weight Shared BERT Model",
"sec_num": "4.2"
},
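The following PyTorch sketch shows one way to realize the architecture of Section 4.2 (shared-weight encoders, [CLS] late fusion, two fully connected layers and a softmax, as in Equation 1). Class and variable names are our own and the use of the Hugging Face BertModel is an assumption, not the authors' released implementation.

```python
# Sketch of the forumBERT head; shared weights are obtained by reusing one encoder.
import torch
import torch.nn as nn
from transformers import BertModel

class ForumBERT(nn.Module):
    def __init__(self, pretrained="bert-base-german-cased", n=768, k=2):
        super().__init__()
        self.encoder = BertModel.from_pretrained(pretrained)  # applied to both inputs
        self.fc1 = nn.Linear(2 * n, n)   # W_1 in R^(2n x n)
        self.fc2 = nn.Linear(n, k)       # W_2 in R^(n x k)

    def forward(self, headline_inputs, comment_inputs):
        # Final hidden state of the [CLS] token for headline and comment (n = 768 each).
        h = self.encoder(**headline_inputs).last_hidden_state[:, 0]
        c = self.encoder(**comment_inputs).last_hidden_state[:, 0]
        x = torch.cat([h, c], dim=-1)    # late fusion, dimension 2n = 1536
        return torch.softmax(self.fc2(self.fc1(x)), dim=-1)  # Equation (1)
```

A tokenizer would produce headline_inputs and comment_inputs as batched tensor dictionaries; the cross-entropy loss of Section 4.2 can then be applied to these probabilities, for example by passing their logarithm to nn.NLLLoss, as in the training-loop sketch in Section 4.3 below.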
{
"text": "As a base for all our experiments we use the BERT BASE model which consists of 12 layers (transformer blocks), 12 attention heads 768 hidden dimensions per token amounting to a total of 110 million parameters. The parameters of this model are initialized using bert-base-germancased 2 , which has been pretrained on the German 2 https://huggingface.co/ bert-base-german-cased Wikipedia Dump (6 GB), the German OpenLegal-Data dump (2.4 GB) and German news articles (3.6 GB) and released by deepset.ai 3 . For the BERT language model finetuning we use 32 bit floating point computations using the Adam optimizer (Kingma and Ba, 2015). The batchsize is set to 8 while the learning rate is set to 3 \u2022 10 \u22125 . The maximum input sequence length is set to 512 tokens, which amounts to about 11 sentences per sequence on average. For all experiments except Experiment 6.1 we use a forumBERT model in which we integrate a topic adapted BERT language model which is trained for 13 epochs on the entire topic with a learning rate of 3 \u2022 10 \u22125 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "For the down-stream online/offline classification task we use 32 bit floating point computations and the Adam optimizer. The models are trained for 7 epochs, with a learning rate of 2 \u2022 10 \u22126 for the two epochs and 6.31 \u2022 10 \u22127 for the remaining 5 epochs. The validation accuracy converges after about 3 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
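A hedged sketch of the downstream training loop under the settings reported above (Adam, seven epochs, learning rate 2·10^-6 for the first two epochs and 6.31·10^-7 afterwards); the dataloader and the loss handling are placeholders rather than the authors' code.

```python
# Sketch only: downstream online/offline training loop for a forumBERT-style model.
import torch

def train_forumbert(model, train_loader,
                    device="cuda" if torch.cuda.is_available() else "cpu"):
    model.to(device)
    criterion = torch.nn.NLLLoss()  # cross-entropy over the softmax output of Eq. (1)
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-6)
    for epoch in range(7):
        if epoch == 2:  # lower the learning rate after the first two epochs
            for group in optimizer.param_groups:
                group["lr"] = 6.31e-7
        for headline_inputs, comment_inputs, labels in train_loader:
            headline_inputs = {k: v.to(device) for k, v in headline_inputs.items()}
            comment_inputs = {k: v.to(device) for k, v in comment_inputs.items()}
            optimizer.zero_grad()
            probs = model(headline_inputs, comment_inputs)
            loss = criterion(torch.log(probs), labels.to(device))
            loss.backward()
            optimizer.step()
```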
{
"text": "For all experiments and results on the NDR dataset, we split the topic dataset in a 9:1 ratio. The larger portion of the dataset is used for language modelling and for training on downstream tasks and the smaller portion is used only for testing on downstream tasks. Table 4 : Results of the comment moderation task on the entire NDR dataset (without any topic segmentation). Precision, Recall, F1-score are all computed on the minority class (offline).",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "For the comment moderation task, we compare the performance of forumBERT with following baseline models trained on the NDR sport and kulur topic datasets: 1) Logistic regression on count vectorizer (BOW model); 2) logistic regression on doc2vec 4 representation (D2V model); 3) 3 layer DNN (dense neural network) (3DNN model) built Table 5 : Comment moderation task results on the NDR sport topic dataset and the culture topic dataset. The results have been computed for three quantities of uniformly sampled training examples with the first two being 1024 and 8192. The final quantity is all training comments from that particular topic. Precision, recall and F1-score are computed on the minority class (offline).",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comment Moderation Task on NDR Dataset",
"sec_num": "5.1"
},
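For illustration, a minimal version of the BOW baseline (count-vectorizer features with logistic regression) might look as follows; the scikit-learn pipeline and the input format are our assumptions, not the authors' exact setup.

```python
# Sketch of the BOW baseline: count-vectorizer features + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def bow_baseline(train_texts, train_labels, test_texts):
    # Each text is the contextualized string "TITLE <headline> COMMENT <comment>".
    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_texts, train_labels)
    return clf.predict(test_texts)
```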
{
"text": "on doc2vec representations; 4) two BERT models. For all models other than forumBERT and a BERT model, contextualized input of the form \"TITLE [title] COMMENT [comment]\", is provided as input. To test the importance of providing context, we also train a BERT model using only comment text as input. We report performance measures on Table 5 . fo-rumBERT significantly outperforms all other models and has the highest F1 scores in both the sports and kultur topic datasets, even in few shot conditions (1024/8192 training examples). From this table, it can be seen that our approach significantly outperforms the standard BERT model, improving the F1 scores from 0.475 to 0.513 (8% increase) in the sports topic and an improvement from 0.465 to 0.490 (a 5.3% increase) in the kultur dataset. Also if we compare forumBERT to a standard BERT model with only comment input the F1 scores increase from 0.452 to 0.513 (a 13.4% performance gain) on the sport topic and an improvement from 0.475 to 0.490 (a 3.15% gain). Table 4 represents the effectiveness of the design architecture of the forumBERT model. The fo-rumBERT model considered here uses a pretrained 2014) was first trained on the NDR dataset, prior to training any models for the comment moderation task.",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 5",
"ref_id": null
},
{
"start": 1012,
"end": 1019,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comment Moderation Task on NDR Dataset",
"sec_num": "5.1"
},
{
"text": "BERT language model without performing topic adaptation. We see that forumBERT outperforms all other methods, giving the best recall value, F1 score and the best accuracy on the entire dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment Moderation Task on NDR Dataset",
"sec_num": "5.1"
},
{
"text": "We also compare the performance of: 1) forum-BERT; 2) BERT with contextualized input 3) BERT without contextualized input; 4) the baselines reported in Schabus et al. (2017) ; 5) advanced baseline for doc2vec (Le and Mikolov, 2014 ) (D2V) vector representation and a support vector machine (Cortes and Vapnik, 1995) with Radial Basis Function (RBF) kernel for classification as reported in Schabus and Skowron (2018) . To compare with the published results, all results have been computed using stratified 10-fold cross validation. The fo-rumBERT model considered here uses a pretrained BERT language model without topic adaptation. The results for each category are reported in Table 6 .",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "Schabus et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 209,
"end": 230,
"text": "(Le and Mikolov, 2014",
"ref_id": "BIBREF3"
},
{
"start": 290,
"end": 315,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF0"
},
{
"start": 390,
"end": 416,
"text": "Schabus and Skowron (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 679,
"end": 687,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification on the OMP Dataset",
"sec_num": "5.2"
},
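The stratified 10-fold protocol used for comparison with the published OMP results can be sketched as follows; train_and_predict is a hypothetical stand-in for fitting and applying one model per fold.

```python
# Sketch of stratified 10-fold cross-validation on one OMP category.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def cross_validate(texts, labels, train_and_predict, n_splits=10):
    labels = np.asarray(labels)
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=n_splits, shuffle=True,
                                               random_state=0).split(texts, labels):
        preds = train_and_predict([texts[i] for i in train_idx], labels[train_idx],
                                  [texts[i] for i in test_idx])
        scores.append(f1_score(labels[test_idx], preds))  # F1 on the minority class (label 1)
    return float(np.mean(scores))
```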
{
"text": "From Table 6 , it can be seen that for categories that do not require additional context from the article (i.e Sentiment Negative and Discriminating) \"BERT with only input comment text\" performs among the best. Providing contextualized input in the form of article title and comment dilutes the information input to the model leading to worse predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification on the OMP Dataset",
"sec_num": "5.2"
},
{
"text": "For categories that require contextualized input (i.e offtopic, inappropriate, Possibly Feedback and Personal Stories) it can be seen that \"BERT with contextualized inputs\" gives best results and slightly outperforms forumBERT in almost all categories to establish the state of the art results. Upon further investigation, we found that 10 articles account for a majority of the annotated comments in OMP. More precisely, 10 articles are the source of 72.1% of all \"OffTopic\" and \"Inappropriate\" annotated comments, 58.3% of all \"Personal Stories\" annotated comments and 45.1% of all \"Possibly Feedback\" comments. Without diversity in the article input to the forumBERT model, it tends to perform slightly worse than BERT. This was not the case with the NDR dataset, where there was enough diversity in the articles (65,261 articles) to promote better classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification on the OMP Dataset",
"sec_num": "5.2"
},
{
"text": "Nonetheless forumBERT exceeds all baseline and advanced baseline results and still offers competitive results on the OMP dataset. Figure 4 : Absolute accuracy percentage improvements on the downstream offline/online classification task as a function of the number of training sentences the BERT language model was fine-tuned on. The and symbols represent the average over 6 runs of finetuning and classification on the \"Sport\" and \"Kultur\" topics of the NDR dataset (Section 3.1) respectively. The 'x' and the '+' represent individual runs. The filled-in portions represent the standard deviation over these 6 runs (\u00b5 \u00b1 \u03c3). The absolute accuracy improvements are measured from 85.94% for the \"Sport\" topic and 81.64% for the \"Kultur\" topic. Table 6 : Classification results for multiple categories on the OMP dataset (Schabus et al., 2017) . Precision, Recall and F1-score have been computed for the minority class for each category.",
"cite_spans": [
{
"start": 817,
"end": 839,
"text": "(Schabus et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 4",
"ref_id": null
},
{
"start": 741,
"end": 748,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification on the OMP Dataset",
"sec_num": "5.2"
},
{
"text": "We aim to answer the following research questions through our experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Q1. How does the number of training iterations in the BERT language model finetuning stage influence the general comment moderation endtask performance on German topic forum datasets?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Q2. What is the influence of topic adaptation on the comment moderation endtask as a function of labelled endtask training examples?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "To answer Q1, we first split the topic datasets into a 9:1 ratio. The larger portion is used for BERT language model finetuning (topic adaptation) and the remaining is used for online/offline classification after every epoch of the language model finetuning. The results are shown in Figure 4 . Figure 4 and Table 2 empirically show that BERT is capable of learning topic specific forum comment knowledge even with less than 100,000 unlabelled training examples. We trained the BERT language model for 15 epochs individually on the sport and culture topic. We also infer that topic based BERT language model finetuning improves the general downstream offline/online task. We see that the performance improves immediately in the case of the more specific sports topic, whereas for the more general Figure 5 : Average online/offline classification F1 score (for the minority \"offline\" class) computed on the sports topic using a pretrained forumBERT model (using bert-base-german-cased) and a sports topic adapted forumBERT model as a function of the number of downstream classification examples. The x-axis is represented on a log 10 scale. The and symbols represent the average over 3 runs of online/offline classification on the sport topic of the NDR dataset (Section 3.1). The 'x' and '+' markers represent the individual runs. The filled-in portions represent the standard deviation over the 3 runs (\u00b5 \u00b1 \u03c3) culture topic, initially downstream classification performs worse (till 100,000 training sentences), but starts to see a steady gain in performance as it is trained after training on 150,000 sentences. Due to high variance in results, we average the results of 6 runs on each topic dataset and measure and plot the standard deviation to measure the improvements in performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 4",
"ref_id": null
},
{
"start": 295,
"end": 303,
"text": "Figure 4",
"ref_id": null
},
{
"start": 308,
"end": 315,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 797,
"end": 805,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Adaptation",
"sec_num": "6.1"
},
{
"text": "To test the effectiveness of topic adaptation and answer Q2, we modelled the following experiment. We trained a pretrained forumBERT model and a sports topic-adapted forumBERT model on the comment moderation endtask using varying number of labelled endtask examples. Due to high variance in few shot results we average the results over 3 runs and measure and plot the standard deviation to generate reliable insights. The results of our experiment are shown in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 461,
"end": 469,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effectiveness of Topic Adaptation",
"sec_num": "6.2"
},
{
"text": "From Figure 5 we see that the pretrained fo-rumBERT model slightly outperforms the topicadapted forumBERT model in very few shot learning situations (< 300 training examples). However, it can be seen that in the range of 315-1000 labelled training examples, the topic-adapted forumBERT model performs as well as the pretrained forum-BERT model. Beyond this (> 1000 labelled train-ing examples), the performance of topic adapted forumBERT clearly exceeds the pretrained forum-BERT without topic adaptation. We also observe that the performance of both models starts converging beyond 10000 training examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 13,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effectiveness of Topic Adaptation",
"sec_num": "6.2"
},
{
"text": "From this experiment, we conclude that the effectiveness of topic adaptation reduces as the number of labelled training examples increase in the downstream task since labelled training examples consist of both task information and topic information, they provide much richer information to the model. As our experiment shows, with more than 10000 labelled training examples the advantage of using a topic adapted model diminishes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Topic Adaptation",
"sec_num": "6.2"
},
{
"text": "In this paper, we introduced forumBERT, a simple architecture designed to determine comment's relevance in a discourse using 2 weight shared BERT models and a late fusion technique on BERT comment and article representations. Also, to mitigate the problem of portability of large NLP language models to niche language domains (in our case small news forums), we adopted a topic adaptation technique to learn better BERT representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We empirically showed that forumBERT outperforms all other baseline models on the NDR dataset. Our adaptation significantly outperforms the standard BERT model, improving the F1 scores from 0.475 to 0.513 (an 8% relative increase) on the sports topic dataset and an F1 score improvement from 0.465 to 0.490 (a 5.3% relative increase) on the culture topic dataset. The model also outperforms all existing advanced baseline results on the OMP dataset. Further analysis also shows the importance of topic adaptation as a function of labelled training examples. We would like to extend the application of forumBERT to other NLP tasks applications involving context dependent classification. Our implementation uses PyTorch (Paszke et al., 2019) and is publicly available. 5",
"cite_spans": [
{
"start": 719,
"end": 740,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.ndr.de/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://deepset.ai/german-bert 4 The doc2vec document embedding(Le and Mikolov,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See https://github.com/ayushyadav99/ forumBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments. This work was partly funded by Hamburg's ahoi.digital program in the Forum 4.0 project. We would also like to thank German broadcaster Norddeutscher Rundfunk (NDR) for giving us access to an extensive collection of moderated NDR.de user comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Support-vector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273-297.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Confer- ence on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on International Conference on Machine Learning",
"volume": "32",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed represen- tations of sentences and documents. In Proceedings of the 31st International Conference on International Confer- ence on Machine Learning -Volume 32, ICML'14, pages 1188-1196, Beijing, China.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Wein- berger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Lake Tahoe, Nevada, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Rai- son, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Vancouver, BC, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Prodromos Malakasiotis, and Ion Androutsopoulos",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Prodromos",
"middle": [],
"last": "Malakasiotis",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3004"
]
},
"num": null,
"urls": [],
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, and Ion Androut- sopoulos. 2017a. Deep learning for user comment mod- eration. In Proceedings of the First Workshop on Abusive Language Online, pages 25-35, Vancouver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improved abusive comment moderation with user embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Prodromos",
"middle": [],
"last": "Malakasiotis",
"suffix": ""
},
{
"first": "Juli",
"middle": [],
"last": "Bakagianni",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism",
"volume": "",
"issue": "",
"pages": "51--55",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4209"
]
},
"num": null,
"urls": [],
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, Juli Bakagianni, and Ion Androutsopoulos. 2017b. Improved abusive com- ment moderation with user embeddings. In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, pages 51-55, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532- 1543, Doha, Qatar.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gard- ner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspecttarget sentiment classification",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rietzler",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Stabinger",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Engl",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4933--4941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Ste- fan Engl. 2020. Adapt or get left behind: Domain adapta- tion through BERT language model finetuning for aspect- target sentiment classification. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4933-4941, Marseille, France.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Delete or not delete? semi-automatic comment moderation for the newsroom",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "166--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Risch and Ralf Krestel. 2018. Delete or not delete? semi-automatic comment moderation for the newsroom. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 166-176, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Academicindustrial perspective on the development and deployment of a moderation system for a newspaper website",
"authors": [
{
"first": "Dietmar",
"middle": [],
"last": "Schabus",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Skowron",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dietmar Schabus and Marcin Skowron. 2018. Academic- industrial perspective on the development and deploy- ment of a moderation system for a newspaper website. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "One million posts: A data set of german online discussions",
"authors": [
{
"first": "Dietmar",
"middle": [],
"last": "Schabus",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Skowron",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Trapp",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "1241--1244",
"other_ids": {
"DOI": [
"10.1145/3077136.3080711"
]
},
"num": null,
"urls": [],
"raw_text": "Dietmar Schabus, Marcin Skowron, and Martin Trapp. 2017. One million posts: A data set of german online discus- sions. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 1241-1244, Tokyo, Japan.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Łukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkor- eit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neural In- formation Processing Systems 30, pages 5998-6008. Long Beach, CA, USA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Łukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc Le, Mo- hammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, and Jeffrey Dean. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BERT post-training for review reading comprehension and aspectbased sentiment analysis",
"authors": [
{
"first": "Hu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2324--2335",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1242"
]
},
"num": null,
"urls": [],
"raw_text": "Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect- based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324-2335, Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language under- standing. In Advances in Neural Information Processing Systems 32, pages 5753-5763. Vancouver, BC, Canada.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SentiBERT: A transferable transformer-based architecture for compositional sentiment semantics",
"authors": [
{
"first": "Da",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.341"
]
},
"num": null,
"urls": [],
"raw_text": "Da Yin, Tao Meng, and Kai-Wei Chang. 2020. SentiBERT: A transferable transformer-based architecture for compo- sitional sentiment semantics. In Proceedings of the 58th",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "3695--3706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Lin- guistics, pages 3695-3706, Online.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Headline: The title of the article \u2022 URL: A URL to the article on the NDR website \u2022 Comment: The comment text \u2022 Date: The date of posting the comment \u2022 Label: A binary offline/online label, which represents the final status of the comment on the website. Offline labelled comments are considered non-desirable content on the forum."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Distribution of comment length on the NDR dataset (clipped to a maximum comment length of 250 words.)"
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Schema diagram for the construction of fo-rumBERT."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Figure 3: forumBERT architecture"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td>Sentiment Negative</td><td>1691</td><td>1908</td><td>47%</td></tr><tr><td>Sentiment Neutral</td><td>1865</td><td>1734</td><td>52%</td></tr><tr><td>Sentiment Positive</td><td>43</td><td>3556</td><td>1%</td></tr><tr><td>Off-Topic</td><td>580</td><td>3019</td><td>16%</td></tr><tr><td>Inappropriate</td><td>303</td><td>3296</td><td>8%</td></tr><tr><td>Discriminating</td><td>282</td><td>3317</td><td>8%</td></tr><tr><td>Possibly Feedback</td><td>1301</td><td>4737</td><td>22%</td></tr><tr><td>Personal Stories</td><td>1625</td><td>7711</td><td>17%</td></tr><tr><td>Arguments Used</td><td>1022</td><td>2577</td><td>28%</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "CategoryDoes Apply Does Not Apply Percentage"
},
"TABREF4": {
"content": "<table><tr><td>bert-base-german-cased</td></tr><tr><td>Finetune on</td></tr><tr><td>forum topic</td></tr><tr><td>Finetuned German BERT</td></tr><tr><td>Integrate finetuned BERT</td></tr><tr><td>model into forumBERT</td></tr><tr><td>forumBERT</td></tr><tr><td>Train forumBERT on</td></tr><tr><td>comment moderation task</td></tr><tr><td>Trained forumBERT</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "2019) on a topic dataset which is algorithmically the same as pretraining the model. The Masked Language Modelling task is used to learn topic knowledge and remove any biases learnt from the pretraining datasets. Next Sentence Prediction helps BERT learn contextualized embeddings that"
},
"TABREF5": {
"content": "<table><tr><td colspan=\"2\">Meas. BOW</td><td>D2V</td><td>BERT</td><td>BERT</td><td>fBERT</td></tr><tr><td colspan=\"3\">com. titPrec. 0.65 0.60</td><td>0.73</td><td>0.71</td><td>0.698</td></tr><tr><td>Rec.</td><td>0.27</td><td>0.15</td><td>0.38</td><td>0.42</td><td>0.431</td></tr><tr><td>F1.</td><td>0.38</td><td>0.24</td><td>0.50</td><td>0.527</td><td>0.533</td></tr><tr><td>Acc.</td><td>0.786</td><td>0.767</td><td>0.810</td><td>0.814</td><td>0.819</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ".+com. com. tit.+com. tit./com."
},
"TABREF7": {
"content": "<table><tr><td colspan=\"3\">Categ. Meas. BOW</td><td>D2V</td><td colspan=\"2\">LSTM BERT</td><td>BERT</td><td>fBERT</td></tr><tr><td colspan=\"4\">Prec. 0.552 com. topNeg. 0.621 Rec. 0.510 0.483</td><td>0.534 0.719</td><td>0.664 0.642</td><td>0.663 0.709</td><td>0.711 0.646</td></tr><tr><td/><td>F1</td><td>0.530</td><td>0.544</td><td>0.613</td><td>0.654</td><td>0.685</td><td>0.677</td></tr><tr><td/><td colspan=\"2\">Prec. 0.275</td><td>0.252</td><td>0.274</td><td>0.513</td><td>0.537</td><td>0.565</td></tr><tr><td>Offtop.</td><td>Rec.</td><td>0.237</td><td>0.453</td><td>0.263</td><td>0.253</td><td>0.337</td><td>0.272</td></tr><tr><td/><td>F1</td><td>0.255</td><td>0.324</td><td>0.268</td><td>0.339</td><td>0.415</td><td>0.368</td></tr><tr><td/><td colspan=\"2\">Prec. 0.162</td><td>0.143</td><td>0.196</td><td>0.360</td><td>0.411</td><td>0.346</td></tr><tr><td>Inappr</td><td>Rec.</td><td>0.111</td><td>0.412</td><td>0.108</td><td>0.188</td><td>0.147</td><td>0.178</td></tr><tr><td/><td>F1</td><td>0.132</td><td>0.212</td><td>0.140</td><td>0.247</td><td>0.217</td><td>0.235</td></tr><tr><td/><td colspan=\"2\">Prec. 0.184</td><td>0.154</td><td>0.113</td><td>0.368</td><td>0.325</td><td>0.304</td></tr><tr><td>Disc</td><td>Rec.</td><td>0.102</td><td>0.283</td><td>0.141</td><td>0.112</td><td>0.052</td><td>0.112</td></tr><tr><td/><td>F1</td><td>0.132</td><td>0.200</td><td>0.126</td><td>0.171</td><td>0.089</td><td>0.163</td></tr><tr><td/><td colspan=\"2\">Prec. 0.655</td><td>0.531</td><td>0.630</td><td>0.741</td><td>0.798</td><td>0.792</td></tr><tr><td>Feed.</td><td>Rec.</td><td>0.580</td><td>0.735</td><td>0.628</td><td>0.698</td><td>0.765</td><td>0.762</td></tr><tr><td/><td>F1</td><td>0.616</td><td>0.617</td><td>0.630</td><td>0.719</td><td>0.781</td><td>0.771</td></tr><tr><td/><td colspan=\"2\">Prec. 0.698</td><td>0.589</td><td>0.638</td><td>0.836</td><td>0.834</td><td>0.832</td></tr><tr><td>Pers.</td><td>Rec.</td><td>0.592</td><td>0.850</td><td>0.665</td><td>0.828</td><td>0.854</td><td>0.841</td></tr><tr><td/><td>F1</td><td>0.640</td><td>0.696</td><td>0.651</td><td>0.832</td><td>0.844</td><td>0.836</td></tr><tr><td/><td colspan=\"2\">Prec. 0.610</td><td>0.545</td><td>0.568</td><td>0.716</td><td>0.742</td><td>0.733</td></tr><tr><td>Arg.</td><td>Rec.</td><td>0.512</td><td>0.763</td><td>0.645</td><td>0.733</td><td>0.754</td><td>0.769</td></tr><tr><td/><td>F1</td><td>0.526</td><td>0.636</td><td>0.604</td><td>0.725</td><td>0.748</td><td>0.750</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ".+com. com. com. tit.+com. tit./com."
}
}
}
}