|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:58:20.766599Z" |
|
}, |
|
"title": "Adult Content Detection on Arabic Twitter: Analysis and Experiments", |
|
"authors": [ |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Qatar Computing Research Institute Doha", |
|
"location": { |
|
"country": "Qatar" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Qatar Computing Research Institute Doha", |
|
"location": { |
|
"country": "Qatar" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Qatar Computing Research Institute Doha", |
|
"location": { |
|
"country": "Qatar" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "With Twitter being one of the most popular social media platforms in the Arab region, it is not surprising to find accounts that post adult content in Arabic tweets; despite the fact that these platforms dissuade users from such content. In this paper, we present a dataset of Twitter accounts that post adult content. We perform an in-depth analysis of the nature of this data and contrast it with normal tweet content. Additionally, we present extensive experiments with traditional machine learning models, deep neural networks and contextual embeddings to identify such accounts. We show that from user information alone, we can identify such accounts with F1 score of 94.7% (macro average). With the addition of only one tweet as input, the F1 score rises to 96.8%.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "With Twitter being one of the most popular social media platforms in the Arab region, it is not surprising to find accounts that post adult content in Arabic tweets; despite the fact that these platforms dissuade users from such content. In this paper, we present a dataset of Twitter accounts that post adult content. We perform an in-depth analysis of the nature of this data and contrast it with normal tweet content. Additionally, we present extensive experiments with traditional machine learning models, deep neural networks and contextual embeddings to identify such accounts. We show that from user information alone, we can identify such accounts with F1 score of 94.7% (macro average). With the addition of only one tweet as input, the F1 score rises to 96.8%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Disclaimer: Due to the nature of this research, we provide examples that contain adult language. We follow academic norms to present them in an appropriate form, however the discretion of the reader is cautioned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, Twitter has become one of the most popular social media platforms in the Arab region . On average, Arab users post more than 27 million tweets per day (Alshehri et al., 2018) . Such popularity has also spawned a number of spammers who exploit the popularity to post malicious content. Such malicious content may contain pornographic references or advertisement. We refer to such content as adult content. Adult content may have deliberating effects on many, particularly among those of younger age groups. Users who fall for the pornographic advertisements are at risk of losing money and sensitive information to the spammers. Due to the massive amount of user-generated content on Twitter, it is impossible to detect such accounts manually and this calls for automatic detection -the focus of this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 191, |
|
"text": "(Alshehri et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Twitter's policy prohibits users from posting adult content 1 . However, the methods deployed for detecting spams such as adult content are mostly expanded from English and are not very effective for detecting accounts who post adult content in other languages as such as the case of Arabic (Abozinadah et al., 2015) . Traditional methods such as filtering by list of words are not effective since spammers use smart ways such as intentional spelling mistakes to evade such filtering (Alshehri et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 316, |
|
"text": "(Abozinadah et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 507, |
|
"text": "(Alshehri et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite the dire need of eliminating adult content from Arabic social media, there has been a very few notable works (Alshehri et al., 2018; Abozinadah et al., 2015; Abozinadah and Jones, 2017) in the field. In contrast to the existing work (Alshehri et al., 2018; Abozinadah et al., 2015; Abozinadah and Jones, 2017) that rely on extracting collection of tweets from each account to classify whether they post adult content, we present a dataset and several models aimed at classifying accounts based on minimal information. By minimal information, we mean user information such as username, user description or just one random tweet from each account. In our study, we focus on using textual features to detect adult content and we leave multimedia (e.g. images) and social network features for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 140, |
|
"text": "(Alshehri et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 165, |
|
"text": "Abozinadah et al., 2015;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 193, |
|
"text": "Abozinadah and Jones, 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 264, |
|
"text": "(Alshehri et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "Abozinadah et al., 2015;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 317, |
|
"text": "Abozinadah and Jones, 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our dataset consists of 6k manually annotated Twitter accounts who post adult content and 44k ordinary Twitter accounts in addition to a tweet from each account (a total of 50k accounts and tweets). We perform extensive analysis of the data to identify characteristics of these accounts. Lastly, we experiment extensively with traditional machine learning models such as Support Vector Machines (SVM) and Multinomial Naive Bayes (MNB), Deep Learning models such as FastText and Contextual Embedding models (BERT) . We analyze contribution of each information available (username, user description, or single tweet) to the performance of the models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 506, |
|
"end": 512, |
|
"text": "(BERT)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since accounts that post adult-content want to attract others, their usernames and user descriptions are often catchy and contain references that are indicators of them posting adult content. We demonstrate that with just username and user description, we can detect these accounts with macro-averaged F1 score of 94.7%. With addition of single tweet as available information, we achieve macro-averaged F1 score of 96.8%. Detecting accounts who post adult content with minimal information (e.g. from username or description) will allow such accounts to be detected early and possible warning messages can be sent to users to protect them from potential harm or inappropriateness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contribution of this work can be summed as: 1) Providing the largest dataset of Twitter accounts that is manually annotated for adult content detection in Arabic, and we make it available for researchers. 2) Exploring the dataset to learn silent features used in the domain as well as features related to users and their profiles. We show that user information can be used for early detection of adult accounts even before tweeting, and when they are combined with tweet text, results are improved. 3) Evaluating a number of machine learning and deep neural approaches for classification of adult content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is structured as follows: In section 2, we discuss related work in the field. In section 3, we describe the data collection method and present analysis of the data. In section 4, we present our experimental setups and results. In section 5, we examine features learned by our best model and perform error analysis that provides insight on how to improve the data and models in the future. Lastly, in section 6, we present conclusions of our work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite the fact that many social media platforms enforce rules and conditions about the content shared on their platforms, malicious users attempt to circumvent these rules and guidelines. Researchers have attempted different approaches for exposing malignant content. Spam detection in particular has gained a lot interest among researchers (e.g., (Po-Ching Lin and Po-Min Huang, 2013; Yang et al., 2013; Herzallah et al., 2018; Grier et al., 2010; ). Spam detection is a generalized approach for detecting unsolicited messages. Our focus in this paper is on the more concentrated field of detecting adult-content, which categorically includes pornographic references.", |
|
"cite_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 387, |
|
"text": "(Po-Ching Lin and Po-Min Huang, 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 406, |
|
"text": "Yang et al., 2013;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 430, |
|
"text": "Herzallah et al., 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 450, |
|
"text": "Grier et al., 2010;", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For English language, there is a number of works devoted to adult-content detection in terms of analyzing the social networks or the content itself. Mitchell et al. (Mitchell et al., 2003 ) study exposure to adult content and its relation to age/gender. Singh et al. (Singh et al., 2016) propose Random Forest classifier to detect pornographic spammers on Twitter. Cheng et al. (Cheng et al., 2015) propose an iterative graph classification technique for detecting Twitter accounts who post adult content. Harish et al. (Yenala et al., 2017 ) study deep learning based methods for detecting inappropriate content in text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 187, |
|
"text": "Mitchell et al. (Mitchell et al., 2003", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 287, |
|
"text": "Singh et al. (Singh et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 398, |
|
"text": "(Cheng et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 540, |
|
"text": "(Yenala et al., 2017", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In Arabic, however, the field of adult-content detection is still relatively unexplored. A related field that has been explored recently in Arabic is abusive/hate-speech detection. There has been a few recent works (Mubarak et al., 2017; Albadi et al., 2018; Hassan et al., 2020a,b) in the areas of offensive and hate-speech detection. However, offensive language and hate-speech have few fundamental differences with adult-content. While offensive language and hate-speech typically consist of profanity and attack on individuals or groups, adult-content may contain profanity but primarily consist of pornographic references. More concentrated work on adult-content detection have been conducted by (Alshehri et al., 2018; Abozinadah et al., 2015; Abozinadah and Jones, 2017) . In (Alshehri et al., 2018), a list of hashtags was used to automatically construct dataset of tweets that contain adult content. In (Abozinadah et al., 2015) , 500 Twitter accounts were manually annotated for adult-content posts. Both (Abozinadah et al., 2015; Alshehri et al., 2018) use traditional machine learning models such as Support Vector Machine (SVM) or Multinomial Naive Bayes (MNB) for classification. Using statistical features of tweet text for classification was proposed in (Abozinadah and Jones, 2017) . Although (Alshehri et al., 2018) perform some analysis of screennames, they do not use them or any other user information for classification. While (Abozinadah et al., 2015) explore number of tweets, followers and following by the accounts, they do not utilize username or user description either. These works rely on collection of tweets from each user for classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 237, |
|
"text": "(Mubarak et al., 2017;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 258, |
|
"text": "Albadi et al., 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 282, |
|
"text": "Hassan et al., 2020a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 701, |
|
"end": 724, |
|
"text": "(Alshehri et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 749, |
|
"text": "Abozinadah et al., 2015;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 777, |
|
"text": "Abozinadah and Jones, 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 937, |
|
"text": "(Abozinadah et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1040, |
|
"text": "(Abozinadah et al., 2015;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1041, |
|
"end": 1063, |
|
"text": "Alshehri et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1270, |
|
"end": 1298, |
|
"text": "(Abozinadah and Jones, 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1310, |
|
"end": 1333, |
|
"text": "(Alshehri et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1449, |
|
"end": 1474, |
|
"text": "(Abozinadah et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We describe the method used to collect the dataset, some statistics and observations about it including most frequent words, emojis and hashtags. We show also the geographical distribution of Adult accounts and some differences between our dataset and previous datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is common for users on Twitter to describe themselves by providing a header (username), a short bio (description) and a location in their profiles. We noticed that many Arabic speaking profiles that post adult content declare their location in terms of the country or the city that they are from. They use this information mainly to describe themselves and/or to communicate with other users. This information can be found in username, user location or user description. Alshehri et. al in (Alshehri et al., 2018) reported that it's common for user names to have city or country names (ex:", |
|
"cite_spans": [ |
|
{ |
|
"start": 493, |
|
"end": 516, |
|
"text": "(Alshehri et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(bottom from Riyadh)) but in fact this is observed in other profile fields as well. Figure 1 shows sample of profiles for artificial adult accounts where city or country names frequently appear in any of profile fields.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 92, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To build a list of country and city names, we obtained all Arabic country names written in either Arabic, English, or French and their major cities from Wikipedia 2 , and we added adjectives specifying nationalities in masculine and feminine forms, for example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(Egypt, Beirut, Iraqi (m.), Moroccan (f.)) and so on. We call this list \"CountryList\". We used Twitter API to crawl Arabic tweets in March and April 2018 using language filter (\"lang:ar\"). During this period, we collected 25M tweets from which we identified all users who posted these tweets. We considered only accounts that contain any entry from CountryList in their profile fields. By doing so, we obtained a list of 60k accounts and one random tweet from each user. As an initial classification, we provided the result as obtained from the best system reported by (Mubarak and Darwish, 2019) for detecting vulgar tweets. Then we asked an Arabic native speaker who is familiar with different dialects to judge whether an account can be considered as adult or not based on all available textual information: user profile information, a sample tweet, and the automatic initial classification. Profile pictures or network features (e.g. followers and followees) were not used during annotation and this can be explored in the future. The annotator was allowed to check Twitter accounts in case of ambiguous cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 596, |
|
"text": "(Mubarak and Darwish, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Final annotation showed that 6k accounts can be considered as Adult while the rest can be considered as Non-Adult. While the system reported by (Mubarak and Darwish, 2019) achieved F1 = 90 in detecting vulgar language on Egyptian tweets used in communication between users, its performance dropped dramatically due to dialect mismatch and the big differences between vulgar communication and adult content 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 171, |
|
"text": "(Mubarak and Darwish, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To conform with Twitter policy that allows sharing up to 50k public tweets and user objects 4 , we took all Adult accounts and 44k from the Non-Adult accounts to have a total of 50k accounts and tweets. To verify annotation quality, Two annotators reviewed a random sample of 100 accounts and tweets (50 Adult and 50 Non-Adult), and agreement was 100% and 94% in the Adult and Non-Adult classes respectively. Cohen's kappa (\u03ba) was used to measure the Inter-Annotator Agreement (IAA). The Cohen's \u03ba value was 0.94 (p-value < 10e-5) which indicates an \"Almost Perfect\" agreement according to the interpretation of the Kappa value (Landis and Koch, 1977) . Preliminary statistics about the dataset are shown in Table 1 , and it can be downloaded from this link: https://alt.qcri.org/ resources/AdultContentDetection.zip.", |
|
"cite_spans": [ |
|
{ |
|
"start": 640, |
|
"end": 651, |
|
"text": "Koch, 1977)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 708, |
|
"end": 715, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
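
{

"text": "The reported agreement figures can be reproduced with scikit-learn; the following is a minimal sketch (not part of the paper), where labels_a and labels_b are hypothetical annotator judgments chosen to be consistent with the agreement numbers above:\n\nfrom sklearn.metrics import cohen_kappa_score\n\n# Hypothetical judgments of the two annotators over the same 100 accounts ('A' = Adult, 'N' = Non-Adult):\n# both agree on all 50 Adult accounts and on 47 of the 50 Non-Adult accounts (94%).\nlabels_a = ['A'] * 50 + ['N'] * 50\nlabels_b = ['A'] * 50 + ['N'] * 47 + ['A'] * 3\n\n# Cohen's kappa corrects the raw agreement (0.97 here) for chance agreement.\nkappa = cohen_kappa_score(labels_a, labels_b)\nprint(round(kappa, 2))  # 0.94 on these illustrative labels",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Collection",

"sec_num": "3.1"

},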
|
{ |
|
"text": "In this subsection, we report some observations about length of Adult and Non-Adult tweets, existence of user mentions, URLs and emojis in both classes, distinguishing words, emojis, and hashtags, etc. Figure 2 -(up) shows that Adult tweets are normally shorter than Non-Adult tweets (9 words ver- This is also confirmed by Figure 2 for the \"@USER\" mentions. They are less common in the Adult tweets. Typically these tweets are not directed to specific persons but are more an attempt to reach a broad audience. In contrast to Adult tweets, there are a large number of Non-Adult tweets that reference specific @USER either in a response or as a mention. We also observe that, in contrast to Non-Adult tweets, Adult tweets use almost 52% more URLs and 32% more emojis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 216, |
|
"text": "-(up)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 210, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 332, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Diving further in our analysis, we would like to investigate the different words and emojis that discriminate each class; for such, we will employ the valence score (Conover et al., 2011) in this analysis. The valence score \u03d1(t) C helps determine the importance of a given word/symbol t in a given class C while considering its presence or absence in other classes. This includes all tokenenized words and symbols. Given f req(t, AD) and f req(t, N A) representing the frequency of the term t in Adult and Non-Adult classes respectively, the valence is computed as follows: Figure 5 : Distribution of the countries for all accounts (up) and for Adult Accounts (down).", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 187, |
|
"text": "(Conover et al., 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 574, |
|
"end": 582, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u03d1(t) AD = 2 f req(t,AD) N (AD) f req(t,AD) N (AD) + f req(t,N A) N (N A) \u2212 1 (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Where N (AD) and (N (N A)) are the total number of occurrences of all vocabulary in the Adult and Non-Adult classes respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
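
{

"text": "A minimal sketch (not from the paper) of Equation 1 in Python; freq_ad and freq_na are hypothetical token counters built from the tokenized Adult and Non-Adult tweets:\n\nfrom collections import Counter\n\ndef valence_adult(token, freq_ad, freq_na):\n    # Relative frequency of the token in each class\n    p_ad = freq_ad[token] / sum(freq_ad.values())  # freq(t,AD) / N(AD)\n    p_na = freq_na[token] / sum(freq_na.values())  # freq(t,NA) / N(NA)\n    # Equation 1: ranges from -1 (Non-Adult only) to +1 (Adult only)\n    return 2 * p_ad / (p_ad + p_na) - 1\n\n# Toy usage with hypothetical counts\nfreq_ad = Counter({'massage': 30, 'news': 5})\nfreq_na = Counter({'massage': 10, 'news': 100})\nprint(round(valence_adult('massage', freq_ad, freq_na), 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Analysis",

"sec_num": "3.2"

},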
|
{ |
|
"text": "Using Equation 1, we computed the prominencevalence score-for emojis and words in both classes. Figure 3 shows the top most frequent emojis in Adult class, and Figure 4 shows top most frequent words in both classes. Figure 5 shows the geographical distribution of all accounts in the dataset and Adult accounts as obtained from self-declaration in user profile (user location, username, or user description). We use ISO 3166-1 alpha-2 for country codes 5 . As 36% of all accounts in our dataset are from Saudi Arabia (SA), it was expected also to find the largest number of Adult accounts (2801 accounts, 47% of all Adult accounts) to come from the same country.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 104, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 224, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We extracted the hashtags that have a valence score of 1 (appear only in the Adult class). The top 150 hashtags list can be downloaded from the same data link: https://alt.qcri.org/resources/ AdultContentDetection.zip.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It is worth mentioning that from the 100 seed hashtags used in (Alshehri et al., 2018) to collect adult tweets, we found 37 hashtags that are common between the two lists. We found some noisy hashtags (not necessary to be used in adult tweets) in the seed hashtags from (Alshehri et al., 2018) such as: #, #, #, #, # (#baby girl ,#Skype, #marriage, #Arabs, #Omani(f.)), while very strong hashtags such as: #, # (#sex, #lesbian) are missed. We believe that our obtained list of adult hashtags are more accurate and diverse and can be used to extract larger and accurate adult tweets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 86, |
|
"text": "(Alshehri et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To train classifiers for automatic detection of Adult tweets, we split the data into training set (70%), development set (10%), and a test set (20%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
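
{

"text": "A minimal sketch (not the authors' code) of one way to produce such a split with scikit-learn, assuming a hypothetical pandas DataFrame df with a label column; the paper specifies only the 70/10/20 ratios, so stratification and the random seed are assumptions:\n\nfrom sklearn.model_selection import train_test_split\n\n# Carve out the 20% test set first, then 12.5% of the remaining 80% as development set (0.125 * 0.8 = 0.10)\ntrain_dev, test = train_test_split(df, test_size=0.20, stratify=df['label'], random_state=42)\ntrain, dev = train_test_split(train_dev, test_size=0.125, stratify=train_dev['label'], random_state=42)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},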
|
{ |
|
"text": "We experiment extensively with 1) different classifiers, and 2) dataset variants with different degree of information available about accounts. Although we conducted experiments on different preprocessing techniques such as removing diacritics or normalizing Arabic text, we did not notice any significant improvement (between 0.1%-0.2%) in performance. We omit these experiments to make room for more significant results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We conduct our experiments with traditional machine learning classifiers Support Vector Machines (SVM) and Multinomial Naive Bayes (MNB), Deep learning based model FastText (Joulin et al., 2016) , and contextual embedding models BERT-Multilingual (Devlin et al., 2019) and AraBERT (Antoun et al., 2020) . FastText and MNB were seen to be outperformed by the other three classifiers. For compactness, we only include the top three classifiers along with the baseline model in our discussion and results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 194, |
|
"text": "(Joulin et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 268, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 302, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Description", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our baseline model simply predicts the majority class, Non-Adult, for every instance. Purpose of the baseline model is to simply act as a reference point for the other classifiers we experimented with.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Model", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "To train our SVM models, we use scikit-learn library 6 .We transform the input text to character and word n-grams vectors using term frequencies (tf)inverse document frequencies (idf) vectorizer. We experiment with different ranges of n-grams for character and words. We experiment by training the SVM 1) on only character n-gram vectors, 2) on only word n-gram vectors, and 3) on both character and word n-gram vectors stacked together. We experimented with ranges from [2-2] to [2-6] for character n-grams and from [1-1] to [1-5] for word n-grams. We found that the results did not improve beyond [2-4] for character n-grams and [1-2] for word n-grams. Only the best results are reported in Table 2 and Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 693, |
|
"end": 712, |
|
"text": "Table 2 and Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines (SVM)", |
|
"sec_num": "4.1.2" |
|
}, |
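
{

"text": "A minimal sketch (not the authors' exact code) of the stacked character + word n-gram SVM in scikit-learn, using the best-performing ranges reported above; the choice of LinearSVC, the default hyperparameters, and the variable names train_texts and train_labels are assumptions:\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.svm import LinearSVC\n\n# tf-idf weighted character [2-4] grams and word [1-2] grams, stacked side by side\nfeatures = FeatureUnion([\n    ('char', TfidfVectorizer(analyzer='char', ngram_range=(2, 4))),\n    ('word', TfidfVectorizer(analyzer='word', ngram_range=(1, 2))),\n])\nclf = Pipeline([('features', features), ('svm', LinearSVC())])\nclf.fit(train_texts, train_labels)  # lists of input strings and class labels (assumed)\npredictions = clf.predict(test_texts)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Support Vector Machines (SVM)",

"sec_num": "4.1.2"

},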
|
{ |
|
"text": "We also experimented with the pre-trained Mazajak word embeddings (skip-gram model trained on 250M tweets) (Abu Farha and Magdy, 2019) as input features for the SVM. Due to its lower performance compared to the n-gram features, we omit these results from the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines (SVM)", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Deep contextual embedding models such as BERT (Devlin et al., 2019) have been seen to outperform many other models for Natural Language Processing (NLP) tasks. Multilingual BERT is a BERTbased model pre-trained on Wikipedia text of 104 languages that includes Arabic. We fine-tune the model for the task of adult content detection by running it for 4 epochs on the training data with learning rate of 8e-5 using ktrain library (Maiya, 2020).", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 67, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual BERT", |
|
"sec_num": "4.1.3" |
|
}, |
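
{

"text": "A minimal sketch (not the authors' exact code) of fine-tuning Multilingual BERT with ktrain's standard Transformer workflow; the checkpoint name, maxlen, batch size, and the use of fit_onecycle are assumptions, while the learning rate (8e-5) and 4 epochs are as stated above:\n\nimport ktrain\nfrom ktrain import text\n\nt = text.Transformer('bert-base-multilingual-cased', maxlen=128, class_names=['Non-Adult', 'Adult'])\ntrn = t.preprocess_train(x_train, y_train)  # x_train: list of input strings (assumed)\nval = t.preprocess_test(x_dev, y_dev)\nmodel = t.get_classifier()\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=32)\nlearner.fit_onecycle(8e-5, 4)  # learning rate and number of epochs from the paper",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual BERT",

"sec_num": "4.1.3"

},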
|
{ |
|
"text": "AraBERT (Antoun et al., 2020) is a BERT-based model specifically trained for Arabic language. The model is pre-trained on Arabic Wikipedia and news articles from various sources. Similar to Multilingual BERT, we fine-tune AraBERT for 4 epochs with learning rate of 8e-5 using ktrain library (Maiya, 2020).", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 29, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AraBERT", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "One of our primary goals is to understand how much information is required to detect accounts who post Adult tweets. To achieve this, we examine contribution of different information available about the Twitter accounts. We also evaluate the classifier when combinations of these information are made available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Variants", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We compare performance of the classifiers when they have access to only i)username, ii) screen name, iii) user description, or iv) single tweet from an account.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual Information", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "We give the classifiers access to increasingly more information to evaluate how their performance change. We notice that addition of screen name does not contribute to any improvement in performance, and thus, it is excluded from our discussion. We discuss change in performance when i) other user information (username and user description) is combined and, ii) the user information is combined with a single tweet from the account. To combine information, we concatenate the strings representing user information and the single tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of Information", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "In Table 2 and Table 3 , we present results of different models on variants of information available. We report precision (P), recall (R), and F1 for the Adult class on the test set. We also report the macroaveraged F1 (mF1), i.e. average of F1 for the Adult and Non-Adult classes because the data is not balanced. We use mF1 metric for comparison in our discussion. The key findings are listed below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 22, |
|
"text": "Table 2 and Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Among the different individual user information available (username, screen name, description), usernames of Twitter accounts carry most importance. From usernames alone, SVMs trained with character ngram features achieve mF1 score of 87.1, an increase of 40.4 from baseline (46.7). Screen name has very little importance as it increases mF1 by only 10.2 from baseline. User description alone results in mF1 score of 86.6 with AraBERT model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 When username and user description are combined, we get a notable spike in performance -mF1 score of 94.7, an increase of 48 from baseline. This is achieved by SVM when character and word n-grams are combined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 From a single tweet, the maximum mF1 score achieved is 88.9, an increase of 42.2 from baseline. This is also achieved by SVM with character and word n-gram vectors as features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 When a single tweet is added to username and user description, the maximum mF1 score achieved is 96.8, an increase of 50.1 from baseline and an increase of 2.1 from user infor- mation alone. This is achieved by AraBERT model and is our best-performing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 SVM trained on word n-gram features alone is outperformed by other classifiers in all cases. It's behind by about 2 in mF1 score compared to the best system in each case. This suggests character-level/contextual information are important for detecting adult content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 SVMs trained on character n-gram, combination of character and word n-gram, MultiB-ERT and AraBERT are very close to each other. For example, in the case of user-name+user description+tweet, the maximum difference between their performances is 0.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4.3" |
|
}, |
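
{

"text": "The metrics reported in Table 2 and Table 3 can be computed with scikit-learn as in the following minimal sketch (not part of the paper), where y_true and y_pred are hypothetical lists of gold and predicted labels on the test set:\n\nfrom sklearn.metrics import f1_score, precision_recall_fscore_support\n\n# Precision, recall, and F1 for the Adult class only\np, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=['Adult'], average=None)\n\n# Macro-averaged F1 (mF1): unweighted mean of the Adult and Non-Adult F1 scores,\n# which is informative here because the two classes are imbalanced\nmf1 = f1_score(y_true, y_pred, average='macro')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment Results",

"sec_num": "4.3"

},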
|
{ |
|
"text": "The confusion matrix of predictions by our best system, AraBERT trained on user information+tweet, is shown in Table 4 . We manually analyzed all classification errors and these errors can be summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 118, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Non-Adult accounts that are detected as Adult: this occurred 65 times. We found that in 70% of these cases, they were annotated incorrectly in the reference, for example when an account has (cash and serious) in either user information or tweet text, this account should be marked as Adult as such term is commonly used by Adult accounts. This suggests that automatic classification can be used iteratively to detect possible annotation errors. The rest of the errors were due to the existence of frequently-used words in Adult accounts such as (massage) but these words can be used also by Non-Adult accounts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Adult accounts that are detected as Non-Adult: this occurred 74 times. Only 7 of these cases were due to errors in the reference annotation while majority of errors were due to: i) using unseen words in the training data (ex: creative spelling of some dialectal adult words); ii) complex cases where combining features in user profiles can intuitively reveal adult accounts to human annotators, e.g. when a screen name is \"K3Eut8i8t3pFMy...\" and the user described himself as (extraordinary romantic) and the tweet is an invitation to come in private. For classifiers, it maybe difficult to capture such complex intuition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We presented a dataset for detecting Twitter accounts who post adult content in Arabic tweets. We performed extensive analysis of the data to identify characteristics of such accounts. In our experiments, we have shown that Support Vector Machines and contextual embedding models AraBERT and Multilingual BERT can detect these accounts with impressive reliability while having access to minimal information about the accounts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the future, we aim to explore if similar methods can be adopted to identify accounts who post other variants of undesirable content such as unsolicited advertisement. Also, we plan to experiment tools that detect adult content in multimedia (e.g. in images) and compare performance with our model that depends only on textual information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://help.twitter.com/en/rules-and-policies/mediapolicy", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://en.wikipedia.org/wiki/List_ of_countries_by_largest_and_second_ largest_cities", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Out of 5,854 accounts classified automatically as vulgar, only 825 accounts are manually classified as adult (14%).4 https://developer.twitter.com/en/ developer-terms/policy", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://en.wikipedia.org/wiki/List_ of_ISO_3166_country_codes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://scikit-learn.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Arabic dialect identification in the wild", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Younes", |
|
"middle": [], |
|
"last": "Samih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.06557" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Abdelali, Hamdy Mubarak, Younes Samih, Sabit Hassan, and Kareem Darwish. 2020. Arabic dialect identification in the wild. arXiv preprint arXiv:2005.06557.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Detection of abusive accounts with arabic tweets", |
|
"authors": [ |
|
{ |
|
"first": "Ehab", |
|
"middle": [], |
|
"last": "Abozinadah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Mbaziira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.7763/IJKE.2015.V1.19" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehab Abozinadah, Alex Mbaziira, and James Jr. 2015. Detection of abusive accounts with arabic tweets. volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A statistical learning approach to detect abusive twitter accounts", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ehab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Abozinadah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International Conference on Compute and Data Analysis, ICCDA '17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6--13", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3093241.3093281" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehab A. Abozinadah and James H. Jones. 2017. A sta- tistical learning approach to detect abusive twitter ac- counts. In Proceedings of the International Confer- ence on Compute and Data Analysis, ICCDA '17, page 6-13, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Mazajak: An online Arabic sentiment analyser", |
|
"authors": [ |
|
{ |
|
"first": "Ibrahim", |
|
"middle": [], |
|
"last": "Abu Farha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walid", |
|
"middle": [], |
|
"last": "Magdy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--198", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4621" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ibrahim Abu Farha and Walid Magdy. 2019. Mazajak: An online Arabic sentiment analyser. In Proceed- ings of the Fourth Arabic Natural Language Process- ing Workshop, pages 192-198, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Albadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kurdi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--76", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ASONAM.2018.8508247" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Albadi, M. Kurdi, and S. Mishra. 2018. Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 69-76.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Think before your click: Data and models for adult content in arabic twitter", |
|
"authors": [], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Alshehri, El Moatez Billah Nagoudi, Hassan Alhuzali, and Muhammad Abdul-Mageed. 2018. Think before your click: Data and models for adult content in arabic twitter.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Arabert: Transformer-based model for arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem M. Hajj. 2020. Arabert: Transformer-based model for arabic lan- guage understanding. ArXiv, abs/2003.00104.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Isc: An iterative social based classifier for adult account detection on twitter", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "1045--1056", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Cheng, X. Xing, X. Liu, and Q. Lv. 2015. Isc: An iterative social based classifier for adult account de- tection on twitter. IEEE Transactions on Knowledge and Data Engineering, 27(4):1045-1056.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "@spam: The underground on 140 characters or less", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Grier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vern", |
|
"middle": [], |
|
"last": "Paxson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--37", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1866307.1866311" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Grier, Kurt Thomas, Vern Paxson, and Michael Zhang. 2010. @spam: The underground on 140 characters or less. In Proceedings of the 17th ACM Conference on Computer and Communications Se- curity, CCS '10, page 27-37, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "ALT at SemEval-2020 task 12: Arabic and English offensive language identification in social media", |
|
"authors": [ |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Younes", |
|
"middle": [], |
|
"last": "Samih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1891--1897", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabit Hassan, Younes Samih, Hamdy Mubarak, and Ahmed Abdelali. 2020a. ALT at SemEval-2020 task 12: Arabic and English offensive language iden- tification in social media. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1891-1897, Barcelona (online). International Com- mittee for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ammar Rashed, and Shammur Chowdhury. 2020b. Alt submission for osact shared task on offensive language detection", |
|
"authors": [ |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Younes", |
|
"middle": [], |
|
"last": "Samih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--65", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabit Hassan, Younes Samih, Hamdy Mubarak, Ahmed Abdelali, Ammar Rashed, and Shammur Chowd- hury. 2020b. Alt submission for osact shared task on offensive language detection. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (LREC 2020), page 61-65.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Feature engineering for detecting spammers on twitter: Modelling and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Wafa", |
|
"middle": [], |
|
"last": "Herzallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hossam", |
|
"middle": [], |
|
"last": "Faris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omar", |
|
"middle": [], |
|
"last": "Adwan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Information Science", |
|
"volume": "44", |
|
"issue": "2", |
|
"pages": "230--247", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/0165551516684296" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wafa Herzallah, Hossam Faris, and Omar Adwan. 2018. Feature engineering for detecting spammers on twitter: Modelling and analysis. Journal of Infor- mation Science, 44(2):230-247.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Fasttext.zip: Compressing text classification models", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthijs", |
|
"middle": [], |
|
"last": "Douze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv\u00e9 J\u00e9gou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. CoRR, abs/1612.03651.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The measurement of observer agreement for categorical data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Landis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gary", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Koch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Biometrics", |
|
"volume": "33", |
|
"issue": "1", |
|
"pages": "159--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, 33(1):159-174.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "2020. ktrain: A low-code library for augmented machine learning", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Arun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Maiya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.10703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arun S. Maiya. 2020. ktrain: A low-code li- brary for augmented machine learning. arXiv, arXiv:2004.10703 [cs.LG].", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The exposure of youth to unwanted sexual material on the internet: A national survey of risk, impact, and prevention", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kimberly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janis", |
|
"middle": [], |
|
"last": "Finkelhor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wolak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Youth & Society", |
|
"volume": "34", |
|
"issue": "3", |
|
"pages": "330--358", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimberly J Mitchell, David Finkelhor, and Janis Wolak. 2003. The exposure of youth to unwanted sexual material on the internet: A national survey of risk, impact, and prevention. Youth & Society, 34(3):330- 358.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Spam detection on arabic twitter", |
|
"authors": [ |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Social Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "237--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamdy Mubarak, Ahmed Abdelali, Sabit Hassan, and Kareem Darwish. 2020. Spam detection on ara- bic twitter. In Social Informatics, pages 237-251, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Arabic offensive language classification on twitter", |
|
"authors": [ |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Social Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "269--276", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamdy Mubarak and Kareem Darwish. 2019. Arabic offensive language classification on twitter. In In- ternational Conference on Social Informatics, pages 269-276. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Abusive language detection on Arabic social media", |
|
"authors": [ |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walid", |
|
"middle": [], |
|
"last": "Magdy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamdy Mubarak, Kareem Darwish, and Walid Magdy. 2017. Abusive language detection on Arabic social media. In Proceedings of the First Workshop on Abu- sive Language Online, pages 52-56, Vancouver, BC, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A study of effective features for detecting long-surviving twitter spam accounts", |
|
"authors": [ |
|
{ |
|
"first": "Po-Ching", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Po-Min", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "2013 15th International Conference on Advanced Communications Technology (ICACT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "841--846", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Po-Ching Lin and Po-Min Huang. 2013. A study of effective features for detecting long-surviving twit- ter spam accounts. In 2013 15th International Con- ference on Advanced Communications Technology (ICACT), pages 841-846.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Behavioral analysis and classification of spammers distributing pornographic content in social media. Social Network Analysis and Mining", |
|
"authors": [ |
|
{ |
|
"first": "Monika", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Divya", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Sofat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s13278-016-0350-0" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Monika Singh, Divya Bansal, and Sanjeev Sofat. 2016. Behavioral analysis and classification of spammers distributing pornographic content in social media. Social Network Analysis and Mining, 6.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Empirical evaluation and new design for fighting evolving twitter spammers", |
|
"authors": [ |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Harkreader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guofei", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Transactions on Information Forensics and Security", |
|
"volume": "8", |
|
"issue": "8", |
|
"pages": "1280--1293", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/tifs.2013.2267732" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chao Yang, Robert Harkreader, and Guofei Gu. 2013. Empirical evaluation and new design for fighting evolving twitter spammers. IEEE Transactions on Information Forensics and Security, 8(8):1280- 1293.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Deep learning for detecting inappropriate content in text", |
|
"authors": [ |
|
{ |
|
"first": "Harish", |
|
"middle": [], |
|
"last": "Yenala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Jhanwar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manoj", |
|
"middle": [], |
|
"last": "Chinnakotla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Journal of Data Science and Analytics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s41060-017-0088-4" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harish Yenala, Ashish Jhanwar, Manoj Chinnakotla, and Jay Goyal. 2017. Deep learning for detecting inappropriate content in text. International Journal of Data Science and Analytics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "User profile on Twitter for male and female artificial adult accounts", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Top 10 emojis for Adult class with valence score \u03d1(.) >= 0.98.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Word cloud for Adult (left) and Non-Adult (right) user information. Most Adult words are related to genitals and sexual actions while most of the Non-Adult words are related to religion, politics, sports, etc.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Dataset statistics. Tokens and Types (unique Tokens) are calculated for tweet text.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Tweets (also Accounts)</td><td>%</td><td colspan=\"2\">Tokens Types</td></tr><tr><td>Adult</td><td>6k</td><td>12%</td><td>59k</td><td>19k</td></tr><tr><td>Not Adult</td><td>44k</td><td>88%</td><td>707k</td><td>195k</td></tr><tr><td>Total</td><td>50k</td><td/><td>766k</td><td>201k</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Performance on user information", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>model</td><td>feats.</td><td>P</td><td colspan=\"2\">screen name R F1</td><td>mF1</td><td>P</td><td colspan=\"2\">username R F1</td><td>mF1</td><td>P</td><td colspan=\"2\">user description R F1</td><td>mF1</td></tr><tr><td>baseline</td><td>-</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td></tr><tr><td>SVM</td><td>c[2-4]</td><td colspan=\"4\">53.1 12.6 20.4 56.9</td><td>93.8</td><td>65.3</td><td>77</td><td>87.1</td><td colspan=\"4\">94.8 61.6 74.7 85.9</td></tr><tr><td>SVM</td><td>w[1-2]</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>89.0</td><td>61.8</td><td colspan=\"2\">72.9 84.9</td><td colspan=\"4\">93.0 58.1 71.5 84.2</td></tr><tr><td>SVM</td><td>c[2-4],</td><td>81.5</td><td>7.9</td><td colspan=\"2\">14.3 54.1</td><td>91.2</td><td>66.2</td><td colspan=\"2\">76.7 87.0</td><td colspan=\"4\">94.8 62.3 75.2 86.2</td></tr><tr><td/><td>w[1-2]</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Multi-</td><td>-</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>85.6</td><td>64.2</td><td colspan=\"2\">73.4 85.1</td><td colspan=\"2\">90.1 65.8</td><td>76</td><td>86.6</td></tr><tr><td>BERT</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Ara-</td><td>-</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td colspan=\"4\">85.1 65.34 73.9 85.4</td><td colspan=\"2\">92.1 64.8</td><td>76</td><td>86.6</td></tr><tr><td>BERT</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "Performance on tweet and combination of tweet + user information", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">username+user</td><td/><td/><td colspan=\"2\">tweet</td><td/><td/><td colspan=\"2\">username+user</td><td/></tr><tr><td>model</td><td>feats.</td><td/><td colspan=\"2\">description</td><td/><td/><td/><td/><td/><td/><td colspan=\"3\">description+tweet</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td><td>mF1</td><td>P</td><td>R</td><td>F1</td><td>mF1</td><td>P</td><td>R</td><td>F1</td><td>mF1</td></tr><tr><td>baseline</td><td>-</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td><td>0.0</td><td>0.0</td><td>0.0</td><td>46.7</td></tr><tr><td>SVM</td><td>c[2-4]</td><td colspan=\"4\">96.6 84.7 90.3 94.5</td><td colspan=\"4\">88.5 70.9 78.7 88.0</td><td colspan=\"4\">96.3 91.7 94.0 96.6</td></tr><tr><td>SVM</td><td>w[1-2]</td><td colspan=\"4\">92.2 82.5 87.1 92.7</td><td colspan=\"4\">85.5 71.3 77.8 87.5</td><td colspan=\"4\">93.5 90.1 91.8 95.3</td></tr><tr><td>SVM</td><td>c[2-4],</td><td colspan=\"4\">96.1 85.8 90.7 94.7</td><td colspan=\"4\">87.4 74.5 80.4 88.9</td><td colspan=\"4\">95.3 93.0 94.1 96.6</td></tr><tr><td/><td>w[1-2]</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Multi-</td><td>-</td><td colspan=\"4\">93.4 86.4 89.8 94.2</td><td colspan=\"4\">83.4 73.8 78.3 87.7</td><td colspan=\"4\">94.4 92.6 93.5 96.3</td></tr><tr><td>BERT</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Ara-</td><td>-</td><td colspan=\"4\">91.1 88.3 89.7 94.1</td><td colspan=\"4\">82.2 76.1 79.1 88.1</td><td colspan=\"4\">94.7 94.0 94.4 96.8</td></tr><tr><td>BERT</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Confusion matrix of AraBERT model", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">Predicted</td></tr><tr><td/><td/><td colspan=\"2\">Adult Non-Adult</td></tr><tr><td>Reference</td><td>Adult Non-Adult</td><td>1161 65</td><td>74 8700</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |