|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:10:32.080718Z" |
|
}, |
|
"title": "Context Sensitivity Estimation in Toxicity Detection", |
|
"authors": [ |
|
{ |
|
"first": "Alexandros", |
|
"middle": [], |
|
"last": "Xenos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Athens University of Economics and Business", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Athens University of Economics and Business", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Athens University of Economics and Business", |
|
"location": { |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "User posts whose perceived toxicity depends on the conversational context are rare in current toxicity detection datasets. Hence, toxicity detectors trained on current datasets will also disregard context, making the detection of context-sensitive toxicity a lot harder when it occurs. We constructed and publicly release a dataset of 10k posts with two kinds of toxicity labels per post, obtained from annotators who considered (i) both the current post and the previous one as context, or (ii) only the current post. We introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context (previous post) is also considered. Using the new dataset, we show that systems can be developed for this task. Such systems could be used to enhance toxicity detection datasets with more context-dependent posts, or to suggest when moderators should consider the parent posts, which may not always be necessary and may introduce an additional cost.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "User posts whose perceived toxicity depends on the conversational context are rare in current toxicity detection datasets. Hence, toxicity detectors trained on current datasets will also disregard context, making the detection of context-sensitive toxicity a lot harder when it occurs. We constructed and publicly release a dataset of 10k posts with two kinds of toxicity labels per post, obtained from annotators who considered (i) both the current post and the previous one as context, or (ii) only the current post. We introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context (previous post) is also considered. Using the new dataset, we show that systems can be developed for this task. Such systems could be used to enhance toxicity detection datasets with more context-dependent posts, or to suggest when moderators should consider the parent posts, which may not always be necessary and may introduce an additional cost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Online fora are used to facilitate discussions, but hateful, insulting, identity-attacking, profane, or otherwise abusive posts may also occur. These posts are called toxic (Borkan et al., 2019) or abusive (Thylstrup and Waseem, 2020) , and systems detecting them (Waseem and Hovy, 2016; Pavlopoulos et al., 2017b; Badjatiya et al., 2017) are called toxicity (or abusive language) detection systems. What most of these systems have in common, besides aiming to promote healthy discussions online (Zhang et al., 2018) , is that they disregard the conversational context (e.g., the parent post in the discussion), making the detection of context-sensitive toxicity a lot harder. For instance, the post \"Keep the hell out\" may be considered as toxic by a moderator, if the previous (parent) post \"What was the title of that 'hell out' movie?\" is ignored. Although toxicity datasets that include conversational context have recently started to appear, in previous work we showed that context-sensitive posts are still too few in those datasets (Pavlopoulos et al., 2020) , which does not allow models to learn to detect context-dependent toxicity. In this work, we focus on this problem. We constructed and publicly release a context-aware dataset of 10k posts, each of which was annotated by raters who (i) considered the previous (parent) post as context, apart from the post being annotated (the target post), and by raters who (ii) were given only the target post, without context. 1 As a first step towards studying contextdependent toxicity, we limit the conversational context to the previous (parent) post of the thread, as in our previous work (Pavlopoulos et al., 2020) . We use the new dataset to study the nature of context sensitivity in toxicity detection, and we introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context (previous post) is also considered. Using the dataset, we also show that systems can be developed for the new task. Such systems could be used to enhance toxicity detection datasets with more context-dependent posts, or to suggest when moderators should consider the parent posts; the latter may not always be necessary and may also introduce additional cost.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 194, |
|
"text": "(Borkan et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 234, |
|
"text": "(Thylstrup and Waseem, 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 287, |
|
"text": "(Waseem and Hovy, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 314, |
|
"text": "Pavlopoulos et al., 2017b;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 338, |
|
"text": "Badjatiya et al., 2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 516, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1040, |
|
"end": 1066, |
|
"text": "(Pavlopoulos et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1482, |
|
"end": 1483, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1649, |
|
"end": 1675, |
|
"text": "(Pavlopoulos et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To build the dataset of this work, we used the also publicly available Civil Comments (CC) dataset (Borkan et al., 2019) . CC was originally annotated by ten annotators per post, but the parent post (the previous post in the thread) was not shown to the annotators. We randomly sampled 10,000 CC posts and gave both the target and the parent post to the annotators. We call this new dataset Civil Comments in Context (CCC). Each CCC post was rated either as NON-TOXIC, UNSURE, TOXIC, or VERY TOXIC, as in the original CC dataset. We unified the latter two labels in both CC and CCC annotations to simplify the problem. To obtain the new in-context labels of CCC, we used the APPEN platform and five high accuracy annotators per post (annotators from zone 3, allowing adult and warned for explicit content), selected from 7 English speaking countries, namely: UK, Ireland, USA, Canada, New Zealand, South Africa, and Australia. 2 The free-marginal kappa (Randolph, 2010) of the CCC annotations is 83.93%, while the average (mean pairwise) percentage agreement is 92%. In only 71 posts (0.07%) an annotator said UNSURE, i.e., annotators were confident in their decisions most of the time. We exclude these 71 posts from our study, as they are too few. The average length of target posts in CCC is only slightly lower than that of parent posts. Fig. 1 shows this counting the length in characters, but the same holds when counting words (56.5 vs. 68.8 words on average). To obtain a single toxicity score per post, we calculated the percentage of the annotators who found the post to be insulting, profane, identity-attack, hateful, or toxic in another way (i.e., all toxicity sub-types provided by the annotators were collapsed to a single toxicity label). This is similar to arrangements in the work of Wulczyn et al. (2017) , who also found that training using the empirical distribution (over annotators) of the toxic labels (a continuous score per post) leads to better toxicity detection performance, compared to using labels reflecting the majority opinion of the raters (a binary label per post). See also Fornaciari et al. (2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 120, |
|
"text": "(Borkan et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 928, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 953, |
|
"end": 969, |
|
"text": "(Randolph, 2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1802, |
|
"end": 1823, |
|
"text": "Wulczyn et al. (2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 2111, |
|
"end": 2135, |
|
"text": "Fornaciari et al. (2021)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1342, |
|
"end": 1348, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The dataset", |
|
"sec_num": "2" |
|
}, |
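To make the aggregation described above concrete, here is a minimal sketch (not the authors' released code) of how a per-post toxicity score and Randolph's free-marginal kappa could be computed from a long-format annotation table with five raters per post; the file name and the `post_id`/`label` column names are hypothetical.

```python
import pandas as pd

# Hypothetical long-format table: one row per (post, annotator) pair, with
# `label` in {"NON-TOXIC", "UNSURE", "TOXIC"} (TOXIC and VERY TOXIC already unified).
annotations = pd.read_csv("ccc_in_context_annotations.csv")  # assumed file name

# Per-post toxicity score: fraction of annotators who flagged any toxicity sub-type.
toxicity = (
    annotations.assign(is_toxic=annotations["label"].eq("TOXIC"))
    .groupby("post_id")["is_toxic"]
    .mean()
)

def free_marginal_kappa(labels_per_post, q=3):
    """Randolph's free-marginal multirater kappa for q nominal categories."""
    agreements = []
    for labels in labels_per_post:  # each post assumed to have >= 2 raters
        n = len(labels)
        same = sum(labels.count(c) * (labels.count(c) - 1) for c in set(labels))
        agreements.append(same / (n * (n - 1)))  # observed pairwise agreement
    p_o = sum(agreements) / len(agreements)      # mean observed agreement
    p_e = 1.0 / q                                # chance agreement, free marginals
    return (p_o - p_e) / (1 - p_e)

labels_per_post = annotations.groupby("post_id")["label"].apply(list).tolist()
print(free_marginal_kappa(labels_per_post))
```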
|
{ |
|
"text": "Combined with the original (out of context) annotations of the 10k posts from CC, the new dataset (CCC) contains 10k posts for which both in-context (IC) and out-of-context (OC) labels are available. Figure 2 shows the number of posts (Y axis) per ground truth toxicity score (X axis). Orange represents the ground truth obtained by annotators who were provided with the parent post when rating (IC), while blue is for annotators who rated the post without context (OC). The vast majority of the posts were unanimously perceived as NON-TOXIC (0.0 toxicity), both by the OC and the IC coders. However, IC coders found fewer posts with toxicity greater than 0.2, compared to OC coders. This is consistent with the findings of our previous work (Pavlopoulos et al., 2020) , where we observed that when the parent post is provided, the majority of the annotators perceive fewer posts as toxic, compared to showing no context to the annotators. To study this further, in this work we compared the two scores (IC, OC) per post, as discussed below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 742, |
|
"end": 768, |
|
"text": "(Pavlopoulos et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 208, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The dataset", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For each post p, we define s ic (p) to be the toxicity (fraction of coders who perceived the post as toxic) derived from the IC coders and s oc (p) to be the toxicity derived from the OC coders. Then, their difference is \u03b4(p) = s oc (p) \u2212 s ic (p). A positive \u03b4 means that raters who were not given the parent post perceived the target post as toxic more often than raters who were given the parent post. A negative \u03b4 means the opposite. Fig. 3 shows that \u03b4 is most often zero, but when the toxicity score changes, \u03b4 is most often positive, i.e., showing the context to the annotators reduces the perceived toxicity in most cases. In numbers, in 66.1% of the posts the toxicity score remained unchanged while out of the remaining 33.9%, in 9.6% it increased (960 posts) and in 24.2% it decreased (2,408) when context was provided. If we binarize the ground truth we get a similar trend, but with the toxicity of more posts remaining unchanged (i.e., 94.7%).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 444, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The dataset", |
|
"sec_num": "2" |
|
}, |
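As a sketch of the computation just described (an illustration under assumptions, not the released code), the context-sensitivity score δ and the unchanged/increased/decreased breakdown could be derived from per-post arrays of binary OC and IC annotator judgements as follows.

```python
import numpy as np

def delta(oc_labels, ic_labels):
    """delta(p) = s_oc(p) - s_ic(p): out-of-context minus in-context toxicity."""
    s_oc = np.mean(oc_labels)   # fraction of OC coders who found the post toxic
    s_ic = np.mean(ic_labels)   # fraction of IC coders who found the post toxic
    return s_oc - s_ic

def summarize(posts):
    """`posts` is a hypothetical list of (oc_labels, ic_labels) pairs, one per CCC post."""
    deltas = np.array([delta(oc, ic) for oc, ic in posts])
    return {
        "unchanged": np.mean(deltas == 0),  # toxicity unaffected by context
        "increased": np.mean(deltas < 0),   # perceived toxicity higher with context
        "decreased": np.mean(deltas > 0),   # perceived toxicity lower with context
    }
```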
|
{ |
|
"text": "When counting the number of posts for which |\u03b4| exceeds a threshold t, called context-sensitive posts in Fig. 4 , we observe that as t increases, the number of context sensitive posts decreases. This means that clearly context sensitive posts (e.g., in an edge case, ones that all OC coders found as toxic while all IC coders found as non toxic) are rare. Some examples of target posts, along with their parent posts and \u03b4, are shown in Table 1. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 111, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The dataset", |
|
"sec_num": "2" |
|
}, |
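The threshold sweep behind Fig. 4 amounts to counting posts with |δ| ≥ t for increasing t; a small illustrative sketch (assumed interface, not the authors' code):

```python
import numpy as np

def count_context_sensitive(deltas, thresholds=np.arange(0.0, 1.01, 0.1)):
    """Number of posts with |delta| >= t for each threshold t (cf. Fig. 4)."""
    deltas = np.asarray(deltas)
    return {float(t): int(np.sum(np.abs(deltas) >= t)) for t in thresholds}
```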
|
{ |
|
"text": "Initially, we used our dataset to experiment with existing toxicity detection systems, aiming to investigate if context-sensitive posts are more difficult to automatically classify correctly as toxic or nontoxic. Then, we trained new systems to solve a different task, that of estimating how sensitive the toxicity score of each post is to its parent post, i.e., to estimate the context sensitivity of a target post.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Study", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We employed the Perspective API toxicity detection system to classify CCC posts as toxic or not. 3 We either concatenate the parent post to the target one to allow the model to \"see\" the parent, or not. 4 Figure 5 shows the Mean Absolute Error (MAE) of Perspective, with and without the parent post concatenated, when evaluating on all the CCC posts (t = 0) and when evaluating on smaller subsets with increasingly context-sensitive posts (t > 0). In all cases, we use the in-context (IC) gold labels as the ground truth. The greater the sensitivity threshold t, the smaller the sample (Fig. 4 ). Figure 5 : Mean Absolute Error (Y-axis) when predicting toxicity for different context-sensitivity thresholds (t; X-axis). We applied Perspective to target posts alone (w/o) or concatenating the parent posts (w). Figure 5 shows that when we concatenate the parent to the target post (w), MAE is clearly smaller, provided that t \u2265 0.2. Hence, the benefits of integrating context in toxicity detection systems may be visible only in sufficiently context-sensitive subsets, like the ones we would obtain by evaluating (and training) on posts with t \u2265 0.2. By contrast, if no context-sensitivity threshold is imposed (t = 0) when constructing a dataset, the non-context sensitive posts (|\u03b4| = 0) dominate (Fig. 4) , hence adding context mechanisms to toxicity detectors has no visible effect in test scores. This explains related observations in our previous work (Pavlopoulos et al., 2020) , where we found that context-sensitive posts are too rare and, thus, context-aware models do not perform better on existing toxicity datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 98, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1457, |
|
"end": 1483, |
|
"text": "(Pavlopoulos et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 213, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 593, |
|
"text": "(Fig. 4", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 605, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 818, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1298, |
|
"end": 1306, |
|
"text": "(Fig. 4)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Toxicity Detection", |
|
"sec_num": "3.1" |
|
}, |
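The evaluation behind Figure 5 can be sketched as follows. This is an illustration under assumptions: `posts` is a hypothetical list of CCC records, and `score_fn` stands in for a call to a toxicity scorer such as the Perspective API (the real client code is not shown).

```python
import numpy as np

def mae_by_threshold(posts, score_fn, thresholds=(0.0, 0.2, 0.4, 0.6), with_parent=False):
    """MAE against the in-context (IC) gold scores on subsets with |delta| >= t.

    `posts`: hypothetical list of dicts with keys "target", "parent", "delta", "s_ic".
    `score_fn`: maps a text to a toxicity score in [0, 1], e.g. a Perspective wrapper.
    """
    results = {}
    for t in thresholds:
        subset = [p for p in posts if abs(p["delta"]) >= t]
        errors = [
            abs(score_fn((p["parent"] + " " + p["target"]) if with_parent else p["target"])
                - p["s_ic"])
            for p in subset
        ]
        results[t] = float(np.mean(errors)) if errors else None
    return results
```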
|
{ |
|
"text": "It is worth observing that the more we move to the right of Fig. 5 , the higher the error for both Per- spective variants (with, without context). This is probably due to the fact that Perspective is trained on posts that have been rated by annotators who were not provided with the parent post (out of context; OC), whereas here we use the in-context (IC) annotations as ground truth. The greater the t in Fig. 5 , the larger the difference between the toxicity scores of OC and IC annotators, hence the larger the difference between the (OC) ground truth that Perspective saw and the ground truth that we use here (IC). Experimenting with artificial parent posts (long or short, toxic or not) confirmed that the error increases for context-sensitive posts. The solution to the problem of increasing error as context sensitivity increases (Fig. 5 ) would be to train toxicity detectors on datasets that are richer in context-sensitive posts. However, such posts are rare (Fig. 4) and thus, they are hard to collect and annotate. This observation motivated the experiments of the next section, where we train context-sensitivity detectors, which allow us to collect posts that are likely to be context-sensitive. These posts can then be used to train toxicity detectors on datasets richer in context-sensitive posts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 66, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 413, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 847, |
|
"text": "(Fig. 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 972, |
|
"end": 980, |
|
"text": "(Fig. 4)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Toxicity Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We trained and assessed four regressors on the new CCC dataset, to predict the context-sensitivity \u03b4. We used Linear Regression, Support Vector Regression, a Random Forest regressor, and a BERT-based (Devlin et al., 2019) regression model (BERTr). The first three regressors use TF-IDF features. In the case of BERTr, we add a feed-forward neural network (FFNN) on top of the top-level embedding of the [CLS] token. The FFNN consists of a dense layer (128 neurons) and a tanh activation function, followed by another dense layer. The last dense layer has a single output neuron, with no activation function, that produces the context sensitivity score. Preliminary experiments showed that adding simplistic context-mechanisms (e.g., concatenating the parent post) to the context sensitivity regressors does not lead to improvements. This may be due to the fact that it is often possible to decide if a post is context-sensitive or not (we do not score the toxicity of posts in this section) by considering only the target post without its parent (e.g., in responses like \"NO!!\"). Future work will investigate this hypothesis further by experimenting with more elaborate context-mechanisms. If the hypothesis is verified, manually annotating context-sensitivity (not toxicity) may also require only the target post. We used a train/validation/test split of 80/10/10, respectively, and we performed Monte Carlo 3fold Cross Validation. We used mean square error (MSE) as our loss function and early stopping with patience of 5 epochs. Table 2 presents the MSE and the mean absolute error (MAE) of all the models on the test set. Unsurprisingly, BERTr outperforms the rest of the models in MSE and MAE. Previous work (Wulczyn et al., 2017) reported that training toxicity regressors (based on the empirical distribution of codes) instead of classifiers (based on the majority of the codes) leads to improved classification results too, so we also computed classification results. For the latter results, we turned the ground truth probabilities of the test instances to binary labels by setting a threshold t (Section 2) and assigning the label 1 if \u03b4 > t and 0 otherwise. In this experiment, t was set to the sum of the standard error of mean (SEM) of the OC and IC raters for that specific post: t(p) = SEM oc (p) + SEM ic (p). By using this binary ground truth, AUPR and AUC ver-ified that BERTr outperforms the rest of the models, even when the models are used as classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 221, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1713, |
|
"end": 1735, |
|
"text": "(Wulczyn et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1532, |
|
"end": 1539, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Context Sensitivity Estimation", |
|
"sec_num": "3.2" |
|
}, |
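A minimal PyTorch/transformers sketch of a BERT-based regressor with the head described above (a 128-unit dense layer with tanh, followed by a single linear output), plus the SEM-based binarization of the ground truth. The model name, hyperparameters, and data handling are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from transformers import AutoModel

class BertContextSensitivityRegressor(nn.Module):
    """BERT encoder + FFNN head on the top-level [CLS] embedding (cf. BERTr)."""
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, 128),  # dense layer with 128 neurons
            nn.Tanh(),               # tanh activation
            nn.Linear(128, 1),       # single output neuron, no activation
        )

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # top-level [CLS] embedding
        return self.head(cls).squeeze(-1)   # predicted context sensitivity

loss_fn = nn.MSELoss()  # trained with MSE; early stopping with patience 5 (see text)

def sem(labels):
    """Standard error of the mean of binary annotator judgements."""
    labels = np.asarray(labels, dtype=float)
    return labels.std(ddof=1) / np.sqrt(len(labels))

def binarize(delta, oc_labels, ic_labels):
    """Label 1 if delta > t(p), with t(p) = SEM_oc(p) + SEM_ic(p)."""
    return int(delta > sem(oc_labels) + sem(ic_labels))
```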
|
{ |
|
"text": "Following the work of Borkan et al. (2019) , this work uses toxicity as an umbrella term for hateful, identity-attack, insulting, profane or posts that are toxic in another way. Toxicity detection is a popular task that has been addressed by machine learning approaches (Davidson et al., 2017; Waseem and Hovy, 2016; Djuric et al., 2015) , including deep learning approaches (Park and Fung, 2017; Pavlopoulos et al., 2017b,c; Chakrabarty et al., 2019; Badjatiya et al., 2017; Haddad et al., 2020; Ozler et al., 2020) . Despite the plethora of computational approaches, what most of these have in common is that they disregard context, such as the parent post in discussions. The reason for this weakness is that datasets are developed while annotators ignore the context (Nobata et al., 2016; Wulczyn et al., 2017; Waseem and Hovy, 2016) . Most of the datasets in the field are in English, but datasets in other languages have the same weakness (Pavlopoulos et al., 2017a; Mubarak et al., 2017; Chiril et al., 2020; Ibrohim and Budi, 2018; Ross et al., 2016; Wiegand et al., 2018) . We started to investigate context-sensitivity in toxicity detection in our previous work (Pavlopoulos et al., 2020) using existing toxicity detection datasets and a much smaller dataset (250 posts) we constructed with both IC and OC labels. Comparing to our previous work, here we constructed and released a much larger dataset (10k posts) with IC and OC labels, we introduced the new task of context-sensitivity estimation, and we reported experimental results indicating that the new task is feasible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 42, |
|
"text": "Borkan et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 293, |
|
"text": "(Davidson et al., 2017;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 316, |
|
"text": "Waseem and Hovy, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 337, |
|
"text": "Djuric et al., 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 396, |
|
"text": "(Park and Fung, 2017;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 425, |
|
"text": "Pavlopoulos et al., 2017b,c;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 451, |
|
"text": "Chakrabarty et al., 2019;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 475, |
|
"text": "Badjatiya et al., 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 496, |
|
"text": "Haddad et al., 2020;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 516, |
|
"text": "Ozler et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 792, |
|
"text": "(Nobata et al., 2016;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 814, |
|
"text": "Wulczyn et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 815, |
|
"end": 837, |
|
"text": "Waseem and Hovy, 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 945, |
|
"end": 972, |
|
"text": "(Pavlopoulos et al., 2017a;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 973, |
|
"end": 994, |
|
"text": "Mubarak et al., 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1015, |
|
"text": "Chiril et al., 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1039, |
|
"text": "Ibrohim and Budi, 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1040, |
|
"end": 1058, |
|
"text": "Ross et al., 2016;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1059, |
|
"end": 1080, |
|
"text": "Wiegand et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1172, |
|
"end": 1198, |
|
"text": "(Pavlopoulos et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We introduced the task of estimating the contextsensitivity of posts in toxicity detection, i.e., estimating the extent to which the perceived toxicity of a post depends on the conversational context. We constructed, presented, and release a new dataset that can be used to train and evaluate systems for the new task, where context is the previous post. Context-sensitivity estimation systems can be used to collect larger samples of context-sensitive posts, which is a prerequisite to train toxicity detectors to better handle context-sensitive posts. Contextsensitivity estimators can also be used to suggest when moderators should consider the context of a post, which is more costly and may not always be necessary. In future work, we hope to incorporate context mechanisms in toxicity detectors and train (and evaluate) them on datasets sufficiently rich in context-sensitive posts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The dataset is released under a CC0 licence. See http: //nlp.cs.aueb.gr/publications.html for the link to download it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We focused on known English-speaking countries. The most common country of origin was USA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.perspectiveapi.com4 We are investigating better context-aware models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank L. Dixon and J. Sorensen for their continuous assistance and advice. This research was funded in part by a Google Research Award.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Deep learning for hate speech detection in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Pinkesh", |
|
"middle": [], |
|
"last": "Badjatiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shashank", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasudeva", |
|
"middle": [], |
|
"last": "Varma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion -WWW '17 Companion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3041021.3054223" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. Proceedings of the 26th International Conference on World Wide Web Com- panion -WWW '17 Companion.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Nuanced metrics for measuring unintended bias with real data for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Borkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Sorensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vasserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "WWW", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "491--500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. In WWW, pages 491-500, San Fran- cisco, USA.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Pay \"attention\" to your context when classifying abusive language", |
|
"authors": [ |
|
{ |
|
"first": "Tuhin", |
|
"middle": [], |
|
"last": "Chakrabarty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilol", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Smaranda", |
|
"middle": [], |
|
"last": "Muresan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Third Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--79", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3508" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tuhin Chakrabarty, Kilol Gupta, and Smaranda Mure- san. 2019. Pay \"attention\" to your context when classifying abusive language. In Proceedings of the Third Workshop on Abusive Language Online, pages 70-79, Florence, Italy. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An annotated corpus for sexism detection in French tweets", |
|
"authors": [ |
|
{ |
|
"first": "Patricia", |
|
"middle": [], |
|
"last": "Chiril", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e9ronique", |
|
"middle": [], |
|
"last": "Moriceau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Farah", |
|
"middle": [], |
|
"last": "Benamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alda", |
|
"middle": [], |
|
"last": "Mari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gloria", |
|
"middle": [], |
|
"last": "Origgi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marl\u00e8ne", |
|
"middle": [], |
|
"last": "Coulomb-Gully", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1397--1403", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patricia Chiril, V\u00e9ronique Moriceau, Farah Benamara, Alda Mari, Gloria Origgi, and Marl\u00e8ne Coulomb- Gully. 2020. An annotated corpus for sexism de- tection in French tweets. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 1397-1403, Marseille, France. Euro- pean Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automated hate speech detection and the problem of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Warmsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Macy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingmar", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International AAAI Conference on Web and Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hate speech detection with comment embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Nemanja", |
|
"middle": [], |
|
"last": "Djuric", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihajlo", |
|
"middle": [], |
|
"last": "Grbovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladan", |
|
"middle": [], |
|
"last": "Radosavljevic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Narayan", |
|
"middle": [], |
|
"last": "Bhamidipati", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--30", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2740908.2742760" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Gr- bovic, Vladan Radosavljevic, and Narayan Bhamidi- pati. 2015. Hate speech detection with comment em- beddings. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Com- panion, page 29-30, New York, NY, USA. Associa- tion for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Beyond black & white: Leveraging annotator disagreement via soft-label multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Fornaciari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Uma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silviu", |
|
"middle": [], |
|
"last": "Paun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2591--2597", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.204" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommaso Fornaciari, Alexandra Uma, Silviu Paun, Barbara Plank, Dirk Hovy, and Massimo Poesio. 2021. Beyond black & white: Leveraging annota- tor disagreement via soft-label multi-task learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 2591-2597, Online. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Arabic offensive language detection with attention-based deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Bushr", |
|
"middle": [], |
|
"last": "Haddad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoher", |
|
"middle": [], |
|
"last": "Orabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anas", |
|
"middle": [], |
|
"last": "Al-Abood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nada", |
|
"middle": [], |
|
"last": "Ghneim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bushr Haddad, Zoher Orabe, Anas Al-Abood, and Nada Ghneim. 2020. Arabic offensive language de- tection with attention-based deep neural networks. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 76-81, Marseille, France. European Language Resource As- sociation.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A dataset and preliminaries study for abusive language detection in indonesian social media", |
|
"authors": [ |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Okky Ibrohim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Indra", |
|
"middle": [], |
|
"last": "Budi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The 3rd International Conference on Computer Science and Computational Intelligence (ICCSCI 2018) : Empowering Smart Technology in Digital Era for a Better Life", |
|
"volume": "135", |
|
"issue": "", |
|
"pages": "222--229", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.procs.2018.08.169" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Okky Ibrohim and Indra Budi. 2018. A dataset and preliminaries study for abusive language detection in indonesian social media. Procedia Computer Science, 135:222-229. The 3rd Interna- tional Conference on Computer Science and Compu- tational Intelligence (ICCSCI 2018) : Empowering Smart Technology in Digital Era for a Better Life.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Abusive language detection on Arabic social media", |
|
"authors": [ |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walid", |
|
"middle": [], |
|
"last": "Magdy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamdy Mubarak, Kareem Darwish, and Walid Magdy. 2017. Abusive language detection on Arabic social media. In Proceedings of the First Workshop on Abu- sive Language Online, pages 52-56, Vancouver, BC, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Abusive language detection in online user content", |
|
"authors": [ |
|
{ |
|
"first": "Chikashi", |
|
"middle": [], |
|
"last": "Nobata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Achint", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yashar", |
|
"middle": [], |
|
"last": "Mehdad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 25th International Conference on World Wide Web, WWW '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "145--153", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2872427.2883062" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th International Conference on World Wide Web, WWW '16, page 145-153, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Fine-tuning for multi-domain and multi-label uncivil language detection", |
|
"authors": [ |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Kadir Bulut Ozler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Kenski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yotam", |
|
"middle": [], |
|
"last": "Rains", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Shmargad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Coe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--33", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.alw-1.4" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kadir Bulut Ozler, Kate Kenski, Steve Rains, Yotam Shmargad, Kevin Coe, and Steven Bethard. 2020. Fine-tuning for multi-domain and multi-label un- civil language detection. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 28-33, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "One-step and two-step classification for abusive language detection on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Ho", |
|
"suffix": "" |
|
}, |
|
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ji Ho Park and Pascale Fung. 2017. One-step and two-step classification for abusive language detec- tion on Twitter. In Proceedings of the First Work- shop on Abusive Language Online, pages 41-45, Vancouver, BC, Canada. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Deep learning for user comment moderation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017a. Deep learning for user comment moderation. In Proceedings of the First Workshop on Abusive Language Online, pages 25- 35, Vancouver, BC, Canada. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deeper attention to abusive user content moderation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1125--1135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017b. Deeper attention to abu- sive user content moderation. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1125-1135, Copen- hagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improved abusive comment moderation with user embeddings", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prodromos", |
|
"middle": [], |
|
"last": "Malakasiotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juli", |
|
"middle": [], |
|
"last": "Bakagianni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 EMNLP Workshop: Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--55", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-4209" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, Juli Bak- agianni, and Ion Androutsopoulos. 2017c. Im- proved abusive comment moderation with user em- beddings. In Proceedings of the 2017 EMNLP Work- shop: Natural Language Processing meets Journal- ism, pages 51-55, Copenhagen, Denmark. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Nithum Thain, and Ion Androutsopoulos. 2020. Toxicity detection: Does context really matter?", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Sorensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
},

{

"first": "Nithum",

"middle": [],

"last": "Thain",

"suffix": ""

},

{

"first": "Ion",

"middle": [],

"last": "Androutsopoulos",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon, Nithum Thain, and Ion Androutsopoulos. 2020. Toxicity detection: Does context really matter?", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Free-marginal multirater kappa (multirater \u03bafree): An alternative to fleiss fixed-marginal multirater kappa", |
|
"authors": [ |
|
{ |
|
"first": "Justus", |
|
"middle": [], |
|
"last": "Randolph", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justus Randolph. 2010. Free-marginal multirater kappa (multirater \u03bafree): An alternative to fleiss fixed-marginal multirater kappa. volume 4.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis", |
|
"authors": [ |
|
{ |
|
"first": "Bj\u00f6rn", |
|
"middle": [], |
|
"last": "Ross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Rist", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillermo", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Cabrera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Kurowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wojatzki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of NLP4CMC III: 3rd Workshop on Natural Language Processing for Computer-Mediated Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben Cabrera, Nils Kurowsky, and Michael Wojatzki. 2016. Measuring the Reliability of Hate Speech An- notations: The Case of the European Refugee Cri- sis. In Proceedings of NLP4CMC III: 3rd Workshop on Natural Language Processing for Computer- Mediated Communication, pages 6-9.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Detecting 'dirt' and 'toxicity': Rethinking content moderation as pollution behaviour", |
|
"authors": [ |
|
{ |
|
"first": "Nanna", |
|
"middle": [], |
|
"last": "Thylstrup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nanna Thylstrup and Zeerak Waseem. 2020. Detecting 'dirt' and 'toxicity': Rethinking content moderation as pollution behaviour.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the NAACL Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "88--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-2013" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Overview of the germeval 2018 shared task on the identification of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Siegel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Ruppenhofer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of GermEval 2018, 14th Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Wiegand, Melanie Siegel, and Josef Ruppen- hofer. 2018. Overview of the germeval 2018 shared task on the identification of offensive language. In Proceedings of GermEval 2018, 14th Conference on Natural Language Processing (KONVENS 2018), Vi- enna, Austria -September 21, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Ex machina: Personal attacks seen at scale", |
|
"authors": [ |
|
{ |
|
"first": "Ellery", |
|
"middle": [], |
|
"last": "Wulczyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1391--1399", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3038912.3052591" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, WWW '17, page 1391-1399, Re- public and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Conversations gone awry: Detecting early signs of conversational failure", |
|
"authors": [ |
|
{ |
|
"first": "Justine", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Danescu-Niculescu-Mizil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiqing", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Taraborelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1350--1361", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1125" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justine Zhang, Jonathan Chang, Cristian Danescu- Niculescu-Mizil, Lucas Dixon, Yiqing Hua, Dario Taraborelli, and Nithum Thain. 2018. Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1350-1361, Mel- bourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Length of parent/target posts in characters.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Histogram (converted to curve) of average toxicity according to annotators who were (IC) or were not (OC) given the parent post when annotating.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Histogram of context sensitivity. Negative (positive) sensitivity means the toxicity increased (decreased) when context was shown to the annotators.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Number of context-sensitive posts (|\u03b4| \u2265 t), when varying the context-sensitivity threshold t.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "Oh Don..... you are soooo predictable. oh Chuckie you are such a tattle tale.", |
|
"num": null, |
|
"content": "<table><tr><td>PARENT OF POST p</td><td>POST p</td><td colspan=\"2\">s OC (p) s IC (p)</td><td>\u03b4</td></tr><tr><td/><td/><td>36.6%</td><td>80%</td><td>-43.4%</td></tr><tr><td>Oh Why would you wish them well?</td><td>\"They\"? Who is they? Do all Chinese look</td><td>70%</td><td>0%</td><td>70%</td></tr><tr><td>They've destroyed the environment in their</td><td>alike to you? Or are you just revealing your</td><td/><td/><td/></tr><tr><td>country and now they are coming here to do</td><td>innate bigotry and racism?</td><td/><td/><td/></tr><tr><td>the same.</td><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "Examples of context-sensitive posts in CCC. Here s OC (p) and s IC (p) are the fractions of out-of-context or in-context annotators, respectively, who found the target post p to be toxic; and \u03b4 = s OC (p) \u2212 s IC (p).", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>: Mean Squared Error (MSE), Mean Abso-</td></tr><tr><td>lute Error (MAE), Area Under Precision-Recall curve</td></tr><tr><td>(AUPR), and ROC AUC of all context sensitivity esti-</td></tr><tr><td>mation models. An average (B1) and a random (B2)</td></tr><tr><td>baseline have been included. All results averaged over</td></tr><tr><td>three random splits, standard error of mean in brackets.</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |