{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:08.593051Z"
},
"title": "Detecting Trending Terms in Cybersecurity Forum Discussions",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Hughes",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Seth",
"middle": [],
"last": "Aycock",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Caines",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Paula",
"middle": [],
"last": "Buttery",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Alice",
"middle": [],
"last": "Hutchings",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a lightweight method for identifying currently trending terms in relation to a known prior of terms, using a weighted logodds ratio with an informative prior. We apply this method to a dataset of posts from an English-language underground hacking forum, spanning over ten years of activity, with posts containing misspellings, orthographic variation, acronyms, and slang. Our statistical approach supports analysis of linguistic change and discussion topics over time, without a requirement to train a topic model for each time interval for analysis. We evaluate the approach by comparing the results to TF-IDF using the discounted cumulative gain metric with human annotations, finding our method outperforms TF-IDF on information retrieval.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a lightweight method for identifying currently trending terms in relation to a known prior of terms, using a weighted logodds ratio with an informative prior. We apply this method to a dataset of posts from an English-language underground hacking forum, spanning over ten years of activity, with posts containing misspellings, orthographic variation, acronyms, and slang. Our statistical approach supports analysis of linguistic change and discussion topics over time, without a requirement to train a topic model for each time interval for analysis. We evaluate the approach by comparing the results to TF-IDF using the discounted cumulative gain metric with human annotations, finding our method outperforms TF-IDF on information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Underground hacking forums contain a large collection of noisy text data around various topics, with misspellings, changing lexicons, and slang phrases. The evolving domain-specific lexicon includes homonyms, where \"rat\" may be identified as an animal by off-the-shelf tools, but is typically defined as a \"remote access trojan\" in this context, a type of malware used to gain access to a victim's computer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We work with texts from the HackForums site 1 , the largest English-language hacking forum, with multiple bulletin boards arranged around various topics, and many active users submitting thousands of new posts every day. The dataset contains over a decade of text data, but detecting trends is nontrivial, due to the informal language used by members, not only technical terms such as \"rat\", but also misspellings, slang, orthographic variation, and acronyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For instance, the following texts demonstrate how posts are structured into threads on given top-1 https://hackforums.net ics, and how users both deliberately and accidentally use noisy language 2 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "User1 Ransomware infects hospitals all over UK: link User2 anyone think they made some money from this? User1 They might of done but idk they'll get caught eventually, it's stupid to commit crimes like this User3 Who tf targets hospitals for ransomeware User1 I dont believe they actually went for the nhs.. the ransom would be more $$$ lol User4 I looked up a few btc addresses and can confirm they made money",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Researchers interested in analysing hot topics on the forum will find it hard to gain a clear perspective on this due to the volume of data going through the forum every day. Therefore, an overview of trending topics with natural language processing and statistical techniques is useful for identifying what may be of interest to security researchers. We propose a tool to identify tokens from trending topics, by pre-tokenising post data, followed by adapting a statistical technique for measuring changes, which can be used to scan across the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The tool builds upon a weighted log-odds ratio (Monroe et al., 2008) with an informative Bayesian prior (Silge et al., 2020) , used to compare differences in two corpora. In our case the corpora represents two distinct time periods of interest within the same subforum 3 . For known events, one period can be a set of texts preceding the event (the prior) and the other period can be texts following the event (the target). Also, the tool can be used for live listings of trending terms in the present day, by comparing new posts against some fixed prior.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Monroe et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 104,
"end": 124,
"text": "(Silge et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our method identifies the relative importance of tokens to each time period. The log-odds ratio indicates whether terms are more likely to appear in a given corpus over others. A log-odds score is higher for terms that are both unique and more frequent to a given period. Other NLP methods require the removal of pre-defined stopwords. However, for our approach, as stopwords have a similar distribution across both time periods, they will have a low log-odds score and rank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The tool looks at \"bursty\" events: for a token to be trending, frequency of the token should be significantly different between the prior and target periods, and be more frequent than other terms in the target period. For identifying topics, our method uses a feature-pivot approach (a topic is a cluster of keywords) over a document-pivot approach (a topic is a cluster of documents). The latter may struggle with documents about multiple topics, whereas the former may incorrectly identify correlations between words as topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A major challenge in developing the tool is that it is to be used on a large dataset of noisy data, for exploring the evolution of underground hacking forums. We take mitigating steps such as storing the pre-tokenised and part-of-speech tagged text, to decrease computation time for longitudinal analysis. While we focus on a cybercrime context, we note this type of data has similarities to Twitter data: short posts, and informal language. However, while Twitter data has some minimal inter-tweet connections through hashtags, quoting comments and replies, forums have a rigid discussion-based structure set by the forum administrators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We adapt a technique used for capturing the linguistic changes between two corpora, to be used as a trending topics tool for temporal analysis of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show the application of this trending topics tool in the context of cybercrime research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Term-frequency inverse-document-frequency (TF-IDF) (Sp\u00e4rck Jones, 1972) identifies common terms in a document, but not common across all documents. This technique provides a mechanism for ranking tokens which are \"important\" to a document. However, forum text is noisy, with varying spelling of words and creative use of punctuation. While TF-IDF is a popular NLP technique, use on forum data would require stemming or lemmatisation, and defining a document either as individual posts, or a thread of posts, for best performance.",
"cite_spans": [
{
"start": 59,
"end": 71,
"text": "Jones, 1972)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF",
"sec_num": "2.1"
},
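To make the baseline concrete, here is a minimal TF-IDF sketch (our own illustration, not code from the paper), where each "document" could be all posts from one time period or one thread:

```python
import math
from collections import Counter

def tfidf(documents):
    """Minimal TF-IDF: `documents` is a list of token lists,
    e.g. one entry per time period or per forum thread."""
    n_docs = len(documents)
    counts = [Counter(doc) for doc in documents]
    df = Counter()                      # document frequency of each term
    for c in counts:
        df.update(c.keys())
    scores = []
    for c in counts:
        total = sum(c.values())
        scores.append({term: (freq / total) * math.log(n_docs / df[term])
                       for term, freq in c.items()})
    return scores
```

Note that with only two documents, any term appearing in both gets idf = log(1) = 0, which is exactly the "common across all documents" downweighting described above.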
{
"text": "TF-IDF assumes each document is based on a single topic, although with forum data, posts and threads may discuss several topics. LDA (Blei et al., 2003) takes a different approach by assuming each document is built from a number of topics, with one primary topic, by learning a distribution of terms in topics. Similar to TF-IDF, this method also requires finding a suitable tokenisation approach and representation of a document. Also, while LDA learns a distribution of terms in topics, this is not as lightweight computationally as TF-IDF.",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LDA",
"sec_num": "2.2"
},
{
"text": "TF-IDF and LDA are both commonly used, but these both have limitations, and improved models have been proposed. Burst and dynamic topic models have been used for detecting trending topics, including a burst model proposed by Kleinberg (2003) , and Takahashi et al. (2012); Koike et al. (2013) who combine Kleinberg's burst model with a dynamic topic model. While these approaches measure frequency changes over time to detect bursts, we use a different approach similar to \"two-point trends\" discussed by Kleinberg (2016) with \"rising\" and \"falling\" words. In addition, we use a Bayesian approach instead of measuring absolute change.",
"cite_spans": [
{
"start": 225,
"end": 241,
"text": "Kleinberg (2003)",
"ref_id": "BIBREF15"
},
{
"start": 273,
"end": 292,
"text": "Koike et al. (2013)",
"ref_id": "BIBREF17"
},
{
"start": 505,
"end": 521,
"text": "Kleinberg (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Topic Techniques",
"sec_num": "2.3"
},
{
"text": "Aiello et al. (2013) explored common NLP methods for detecting trending topics on Twitter, related to major events which differ in time scale and topic churn rates, and suggest later work should look at topics evolving in parallel. They found n-gram cooccurrence (groups of words typically appearing in the same document), and DF-IDF t topic ranking (an adaptation of TF-IDF to look for common topics unique to a given time period in comparison to prior time periods) to perform the best. They also boosted the score of proper nouns in their approach, finding these are useful keywords for trending topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Topic Techniques",
"sec_num": "2.3"
},
{
"text": "Follow-up work by Martin et al. (2015) detected bursts of phrases for a topic detection system, using DF-IDF t to group co-occurring bursty phrases, followed by topic ranking, using the apriori algorithm. They also look at windowing, where events which are focused on real-time activity (e.g sports) have a smaller window of activity, with greater topic recall than longer topics (e.g. politics) with discussions continuing after events. Super Tuesday (the Tuesday in which many US states hold their primary elections) performed better with fewer prior tweets as this was a longer event, than others which performed better with a longer window.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "Martin et al. (2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Topic Techniques",
"sec_num": "2.3"
},
{
"text": "Previous research has focused on static snapshots of events, whereas Shamma et al. 2011used temporal analysis to identify both peaky and persistent topics. Trending topics tools which are sensitive to noise may only detect peaky topics over persistent topics. They used normalised term frequency, with the number of tweets containing the word, rather than the number of times a word is used, and the peaks look at terms particular to an exact window of time. Persistence looks at peaks of normalised term frequency, assuming these terms have not been used before, and have been used more frequently afterwards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Topic Techniques",
"sec_num": "2.3"
},
{
"text": "While much of the literature focuses on detecting English-language trending topics, many cybercrime forums are not English-speaking, which can add complexity into analysis. Also, there are some cases where topic modelling may produce poor quality results, and could be refined with user feedback, which is explored by Hu et al. (2014) with iterating models (hierarchical-LDA trees).",
"cite_spans": [
{
"start": 318,
"end": 334,
"text": "Hu et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Topic Techniques",
"sec_num": "2.3"
},
{
"text": "Monroe et al. 2008introduced a method for comparing lexical tokens used by two political parties. This uses a model-based approach, modelling terms as a function of political party, to compute the likelihood of terms used by a political party as log likelihoods (\"log-odds\"). They used an uninformative Dirichlet prior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fightin' Words paper",
"sec_num": "2.4"
},
{
"text": "We adapt this method to time-based analysis, modelling terms as a function of time. We use an informative Bayes prior, which was used in the R tidylo library by Silge et al. (2020) . While this method was initially used to compare two distinctly different political party news corpora, we adapt this to examine a longitudinal dataset to explore how a particular corpus has changed over time.",
"cite_spans": [
{
"start": 161,
"end": 180,
"text": "Silge et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fightin' Words paper",
"sec_num": "2.4"
},
{
"text": "Our method uses a Bayesian approach to identifying trending topics, with filtering by noun phrases using a part-of-speech (PoS) tagger. However, an alternative approach may use named entity recognition to detect trending topics, and later, for extracting events from text. However, Caines et al. (2018) note named entity recognisers are trained on well-formed English text, and their performance is degraded with noisy text.",
"cite_spans": [
{
"start": 282,
"end": 302,
"text": "Caines et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2.5"
},
{
"text": "There has been prior work in using NER on noisy text, including with a shared challenge at W-NUT 2017 (Derczynski et al., 2017) . One approach by Aguilar et al. (2017) used a convolutional neural network with both character-level and word-level features combined with contextual information, input into a bidirectional LSTM, for this task. Jansson and Liu (2017) also used a bidirectional LSTM for word and character embeddings, but combined these with an LDA topic model.",
"cite_spans": [
{
"start": 102,
"end": 127,
"text": "(Derczynski et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 146,
"end": 167,
"text": "Aguilar et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2.5"
},
{
"text": "Additionally, contextual data can be used to assist with this task. Xing and Paul (2017) combined word embeddings with Twitter network and geolocation data to improve the accuracy of NER. While we do not have access to this type of data about HackForums users, the forum structure provides hierarchy with administrator defined subforums, which could be used as a feature to combine with embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2.5"
},
{
"text": "Work into trending topics in cybercrime has focused on identifying new threats, using data from tweets, blogs, and underground forums. This includes the creation of large-scale frameworks, such as Sapienza et al. (2018) who detect emerging threats across datasets, although this depends on annotations of known keywords. This is problematic for cybercrime research, due to the constantly changing lexicon. Behzadan et al. (2018) released a tool to assist annotators in exploring Twitter data, with an annotated dataset of 21,000 tweets on cyber threats. However, this still requires manual identification of new terms.",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "Sapienza et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 406,
"end": 428,
"text": "Behzadan et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cybercrime trending topics",
"sec_num": "2.6"
},
{
"text": "Once a trending topic is identified, topic ranking is needed, to avoid overwhelming a user. This is used to highlight current important topics, including Bose et al. (2019) who use this to detect and flag known serious threats.",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "Bose et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cybercrime trending topics",
"sec_num": "2.6"
},
{
"text": "Also, other approaches such as PoS-tagging and sentiment analysis have been used to identify threats, such as work by Macdonald et al. (2015) , however there is a range of jargon used on the forum, with spelling variations and changes to meaning over time, which models would need to handle. There have been other approaches to look at trends on forums and marketplaces, including Tavabi et al. (2019) who use a large topic model to map the evolution of different forums as they evolve. These communities also evolve over time, with changing meanings of words, and an evolving lexicon, which should be taken into account with longitudinal topic modelling. Bhandari and Armstrong (2019) have looked at subforums of Reddit to explore the use of high affinity terms used by communities, looking at how the semantics of these have changed.",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "Macdonald et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 381,
"end": 401,
"text": "Tavabi et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 656,
"end": 685,
"text": "Bhandari and Armstrong (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cybercrime trending topics",
"sec_num": "2.6"
},
{
"text": "For our method, we use the CrimeBB dataset from the Cambridge Cybercrime Centre (Pastrana et al., 2018b) , available for researcher use from the Cambridge Cybercrime Centre 4 . CrimeBB contains posts scraped from 27 underground and dark web forums related to cybercrime, with over 13 years of post data. The database contains English, Russian, and German-language forums. Each forum is structured by subforums, which are based on general topics e.g. hacking methods or marketplace, and are defined by the forum administrators. Each subforum contains threads, which are an ordered collection of posts focusing on a defined topic set by the first post in the thread, such as a particular tutorial the author is sharing. Later posts can be providing a reply to the original first post, a reply to a later post by another user, or new information on the topic. While threads are typically focused on a particular topic, longer threads may become off-topic.",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "(Pastrana et al., 2018b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "We selected HackForums from this dataset for our evaluation, which is an underground hacking forum discussing various aspects of hacking techniques. Our dataset contains over 190 administrator-curated subforums, with 4 million threads, and 42 million posts, created by over 630,000 members of the forum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The method is selected due to the focus of the dataset: the data is \"noisy\", containing variations of spelling (e.g., \"ransomeware\" instead of \"ransomware\"), orthography (e.g., \"NK\" and \"nk\" for North Korea), and length of posts (ranging from short replies \"pm me\" to longer in-depth tutorials). In addition, due to the size of the dataset, our method requires a lightweight approach in order to measure the evolution of trends and topics over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Ethics approval was granted from the department's ethics committee for this work. We used data collected from a publicly available forum, and could not gain informed consent from all members as this would be considered to be spamming. As we only analyse posts as a collective whole, rather than identifying individual posts, under the British Society of Criminology's Statement of Ethics, this falls outside of the requirement of informed consent. We also avoid publishing details that could identify individuals, including usernames and original post contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethics",
"sec_num": "3.2"
},
{
"text": "We first remove chunks of the forum post text which are not the main content of posts, including quote, link, and code blocks. These are identified by using regular expressions to identify relevant markup blocks. This approach is specific to the dataset we use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and pre-processing",
"sec_num": "3.3"
},
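As a minimal sketch of this pre-processing step, assuming hypothetical BBCode-style markup (the actual dataset-specific patterns are not given in the paper):

```python
import re

# Hypothetical BBCode-style patterns; the real dataset-specific regexes differ.
BLOCK_RE = re.compile(
    r"\[(quote|code|link)[^\]]*\].*?\[/\1\]",  # e.g. [quote=...]...[/quote]
    flags=re.DOTALL | re.IGNORECASE,
)

def strip_markup_blocks(post: str) -> str:
    """Drop quote, code, and link blocks, keeping the post's own text."""
    return re.sub(r"\s+", " ", BLOCK_RE.sub(" ", post)).strip()
```

Removing quoted blocks before counting tokens avoids double-counting text that a replying user merely repeats.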
{
"text": "Secondly, we tokenise the lowercased forum post text, using TweetTokeniser in NLTK (Bird et al., 2009) . This is suited to handling URLs and punctuation based emoticons in text. Note we do not remove a pre-defined list of stop words, however our Bayesian approach will decrease the relevance of a large number of very frequent words which appear equivalently in the prior and target texts.",
"cite_spans": [
{
"start": 83,
"end": 102,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and pre-processing",
"sec_num": "3.3"
},
{
"text": "Following this, we carry out PoS-tagging using spaCy (Honnibal and Montani, 2017) to identify nouns and noun phrases in posts, which we filter results by. Note that we do not apply this step before calculating log-odds, as this would change the distribution of tokens used in a period, affecting the quality of results.",
"cite_spans": [
{
"start": 53,
"end": 81,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and pre-processing",
"sec_num": "3.3"
},
{
"text": "We store both the token counts and set of nouns for each post in the forum. These are stored separately for each subforum in HackForums. Note that we do not attempt to merge terms which may vary in their orthographic form -for instance acronyms or abbreviations with their full forms, spelling errors, and casing differences. It remains a matter for future investigation whether acronyms and abbreviations should always be associated with fully spelled-out forms, or whether they should be kept distinct because they represent different uses of the term. Secondly, we can introduce a spell-checker in future work to cluster misspelled words with their intended form, but this will need adaptation to the vocabulary of the cybersecurity domain. Finally, we do capture casing differences (e.g., \"WannaCry\" and \"wannacry\", and \"NHS\" and \"nhs\") because all texts are lower-cased before tokenisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and pre-processing",
"sec_num": "3.3"
},
{
"text": "The method requires the selection of two time windows: a prior and target period. The prior period is used to learn a distribution of terms used, as a comparator for the target period. The size and placement of windows can be varied depending on the desired results: long-term trend detection would have a longer, and more distant, target window than for short-term trend detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Windowing: Prior and Target",
"sec_num": "3.4"
},
{
"text": "These windows should be selected depending on the dataset used and research questions. If the prior window and target window overlap the same event, then these terms will appear in both windows with a similar frequency, and will therefore have low log-odds. If the prior and target window are too far apart, then the prior may not be representative, leading to poor quality results. Also, if a topic is re-trending, and the previous trending period falls in the prior, then this may affect whether a term appears to be trending.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Windowing: Prior and Target",
"sec_num": "3.4"
},
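The window selection described here can be sketched as follows (the helper name and defaults are our own; the 7-day windows and roughly one-month gap match the evaluation setup in Section 4):

```python
from datetime import date, timedelta

def make_windows(event_start: date, window_days: int = 7, gap_days: int = 30):
    """Return (prior, target) windows as inclusive (start, end) date pairs:
    the target starts at the event; the prior is an equal-length window
    beginning gap_days earlier."""
    target = (event_start, event_start + timedelta(days=window_days - 1))
    prior_start = event_start - timedelta(days=gap_days)
    prior = (prior_start, prior_start + timedelta(days=window_days - 1))
    return prior, target
```

With the WannaCry start date of 12 May 2017, this reproduces the prior window 2017-04-12 to 2017-04-18 and target window 2017-05-12 to 2017-05-18 used later in the paper.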
{
"text": "Our approach uses a method implemented in the tidylo R library by Silge et al. (2020) , which we have re-implemented in Python for compatibility with other tools. The tidylo R library uses an informative prior Bayesian approach, instead of the Dirichlet uninformative prior used by Monroe et al. (2008) . A later version of the tidylo library added support for the uninformative prior. However, we chose to continue using the Bayesian approach as our time-based application of the tool is suited to using an informative prior.",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "Silge et al. (2020)",
"ref_id": null
},
{
"start": 282,
"end": 302,
"text": "Monroe et al. (2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "We adapt this approach, created for comparing two corpora, to detect trending tokens. Instead of selecting corpora by pre-existing classes, we choose prior and target time windows, to find terms which are more likely to appear in the prior or target period. Each period is represented as a \"bagof-words\", for all posts in the selected period. This Bayesian approach is shown in the following series of equations, based upon the tidylo implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "For the corpus (combined set of posts in both periods) y, we define y w as the frequency of token w, and y wi as the frequency of the token w in period i. n is the sum of frequencies of tokens across all periods, and n i is the sum of frequencies of tokens in the period i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "First, we calculate \u03c9 wi , the odds of each token appearing in period i, and \u03c9 w , the odds of each token appearing the corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 wi = y wi n i \u2212 y wi (1) \u03c9 w = y w n \u2212 y w",
"eq_num": "(2)"
}
],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "Secondly, we calculate \u03b4 wi , the log odds ratio to compare the usage of the token w in period i to the whole corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4 wi = log \u03c9 wi \u2212 log \u03c9 w",
"eq_num": "(3)"
}
],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "Thirdly, we calculate the variance of our estimate, \u03c3 2 wi :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c3 2 wi = 1 y wi + 1 y w",
"eq_num": "(4)"
}
],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "Finally, we calculate the log odds score \u03b6 wi for each token w in period i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b6 wi = \u03b4 wi \u03c3 2 wi",
"eq_num": "(5)"
}
],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
{
"text": "Depending on when the prior and target time windows occur, the tool will either pick up short or long term trending tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the log-odds method",
"sec_num": "3.5"
},
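As an illustrative aside, the series of equations above can be sketched in a few lines of Python (our own implementation and toy counts, not the authors' released code):

```python
import math
from collections import Counter

def weighted_log_odds(prior_tokens, target_tokens):
    """Score each token by the weighted log-odds of Equations 1-5:
    higher scores mark tokens over-represented in the target period."""
    period = Counter(target_tokens)            # y_wi for the target period
    corpus = Counter(prior_tokens) + period    # y_w over both periods
    n = sum(corpus.values())                   # n: all token occurrences
    n_i = sum(period.values())                 # n_i: target-period occurrences
    scores = {}
    for w, y_w in corpus.items():
        y_wi = period[w]
        if y_wi == 0 or y_wi == n_i or y_w == n:
            continue                           # odds undefined at the boundaries
        omega_wi = y_wi / (n_i - y_wi)         # Eq. 1: odds within the period
        omega_w = y_w / (n - y_w)              # Eq. 2: odds within the corpus
        delta = math.log(omega_wi) - math.log(omega_w)   # Eq. 3: log-odds ratio
        sigma2 = 1 / y_wi + 1 / y_w            # Eq. 4: variance estimate
        scores[w] = delta / math.sqrt(sigma2)  # Eq. 5: standardised score
    return scores
```

A stopword-like token shared evenly across both periods scores near or below zero, while a token bursting in the target period ranks highest, matching the observation that explicit stopword removal is unnecessary.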
{
"text": "We evaluate the results of the tool by carrying out an information retrieval task with human annotators. We compare the log-odds approach with TF-IDF using discounted cumulative gain and the human annotations as a ground-truth ranking of identified terms. We use both a known cybersecurity event to define our target window, as well as a randomly-selected target window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
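For reference, the discounted cumulative gain used in this evaluation is the standard metric; this sketch is ours, not the authors' evaluation code:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of relevance grades:
    the grade at 0-based position i is discounted by log2(i + 2), so the
    top-ranked item is undiscounted (log2(2) = 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))
```

Rankings that place the terms annotators judged most relevant first receive a higher score, so DCG rewards orderings that agree with the human ground truth.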
{
"text": "Within CrimeBB, we selected HackForums, as this is widely studied in prior cybercrime literature (Pastrana et al., 2018a,b; Bhalerao et al., 2019) .",
"cite_spans": [
{
"start": 97,
"end": 123,
"text": "(Pastrana et al., 2018a,b;",
"ref_id": null
},
{
"start": 124,
"end": 146,
"text": "Bhalerao et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Event Selection",
"sec_num": "4.1"
},
{
"text": "First, for the known event we selected the spread of WannaCry in the year 2017. WannaCry is a type of ransomware, which encrypts data until the victim pays a ransom. WannaCry spreads through vulnerable computer systems, instead of directly targeting specific entities, where these systems have not previously updated their systems to patch this issue. One of the largest organisations affected by this attack was the National Health Service (NHS), the universal public healthcare system in the UK. We selected this event as we anticipated it would have been extensively covered on the forum. Indeed, it was later revealed that the individual who was instrumental in stopping the spread of Wan-naCry had formerly been an active forum member (Krebs, 2017) .",
"cite_spans": [
{
"start": 740,
"end": 753,
"text": "(Krebs, 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Event Selection",
"sec_num": "4.1"
},
{
"text": "The incident within the NHS began on Friday 12 May 2017 (Smart, 2018) , which we select as the start of the 7 day window for our analysis. We selected the \"News and Happenings\" subforum with a prior period of 2017-04-12 to 2017-04-18 and a target period of 2017-05-12 to 2017-05-18. The prior contains 404 posts, and the target contains 470 posts.",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "(Smart, 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Event Selection",
"sec_num": "4.1"
},
{
"text": "Secondly, we randomly selected a subforum, \"Monetizing Techniques\", and a random date range for the target (2016-12-23 to 2016-12-29 for the target, and a week in the previous month for the prior: 2016-11-23 to 2016-11-29) . The prior contains 195 posts and the target contains 295 posts.",
"cite_spans": [
{
"start": 190,
"end": 222,
"text": "prior: 2016-11-23 to 2016-11-29)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trending Event Selection",
"sec_num": "4.1"
},
{
"text": "We compare our approach to TF-IDF for topic ranking, using a similar approach to log-odds. This includes creating two TF-IDF \"documents\" as the set of posts for a given period (e.g. prior or target), as this is similar to the current method (frequent terms in the period but not frequent across all periods). We use the same tokenisation and pre-processing approach as the log-odds tool, to provide direct comparison. We selected TF-IDF, as it is a lightweight technique for topic ranking and detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-odds and TF-IDF Results",
"sec_num": "4.2"
},
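A minimal sketch of this two-"document" baseline, using a generic smoothed-idf formulation; the helper names are ours and this is not necessarily the exact TF-IDF variant used in the paper:

```python
import math
from collections import Counter

def tfidf_rank(prior_tokens, target_tokens):
    """Score target-period tokens by TF-IDF, treating all posts from each
    period as a single 'document' in a two-document collection."""
    docs = [Counter(prior_tokens), Counter(target_tokens)]
    n_docs = len(docs)
    target = docs[1]
    scores = {}
    for token, tf in target.items():
        df = sum(1 for doc in docs if token in doc)
        # Smoothed idf: tokens unique to the target period score higher
        # than tokens appearing in both periods.
        idf = math.log((1 + n_docs) / (1 + df)) + 1
        scores[token] = tf * idf
    return scores

# Toy example: "wannacry" is frequent in the target and absent from the prior.
prior = ["free", "money", "guide", "traffic"]
target = ["wannacry", "wannacry", "ransom", "money", "traffic"]
scores = tfidf_rank(prior, target)
```

With only two documents, idf can take just two values, which is one reason this baseline separates trending from background terms less sharply than the log-odds statistic.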
{
"text": "For each event and technique, we plotted the top 10 tokens for the prior and target periods. For the \"WannaCry\" event, Figures 1 and 2 show the top tokens and scores for the prior and target periods. The results of the log-odds tool for the target period all contain tokens related to the WannaCry ransomware event. While TF-IDF also includes tokens related to the WannaCry ransomware event, it additionally contains terms related to different events (e.g., \"notebook\", \"pirates\", and \"sharing\"). Figures 3 and 4 show the top tokens and scores for the randomly selected event. ",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 134,
"text": "Figures 1 and 2",
"ref_id": null
},
{
"start": 497,
"end": 512,
"text": "Figures 3 and 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Log-odds and TF-IDF Results",
"sec_num": "4.2"
},
{
"text": "First, we generated a list of ranked terms from both the tool and from TF-IDF, selecting the union of For the WannaCry event, these were: amount, computer, cyber, data, hospital, hospitals, malware, microsoft, money, notebook, pirates, ransom, ransomware, security, sharing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "4.3"
},
{
"text": "For the randomly selected event, these were: affiliate, betting, burst, fiverr, guide, imdb, laptop, mining, movie, movies, network, time, traffic, videos, week.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "4.3"
},
{
"text": "For each event, we presented the three annotators with each post from the prior and target periods with the accompanying tags. The annotators selected the most salient tags for each post, leaving posts not annotated if there were no suitable salient tags. We measured inter-annotator agreement using multinomial Krippendorff's alpha with the MASI distance metric of sets (Passonneau, 2006) for comparison, finding an overall agreement of 0.833.",
"cite_spans": [
{
"start": 371,
"end": 389,
"text": "(Passonneau, 2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "4.3"
},
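The MASI set-distance used for agreement can be sketched as below, following Passonneau's (2006) definition of Jaccard similarity weighted by a monotonicity factor; the function name and example tag sets are hypothetical:

```python
def masi_distance(a, b):
    """MASI distance between two sets of labels (Passonneau, 2006):
    1 - Jaccard * monotonicity, so identical sets have distance 0 and
    disjoint sets have distance 1."""
    if not a and not b:
        return 0.0
    jaccard = len(a & b) / len(a | b)
    if a == b:
        monotonicity = 1.0
    elif a <= b or b <= a:    # one annotation subsumes the other
        monotonicity = 0.67
    elif a & b:               # overlapping but conflicting annotations
        monotonicity = 0.33
    else:                     # disjoint annotations
        monotonicity = 0.0
    return 1.0 - jaccard * monotonicity

d_same = masi_distance({"malware", "ransom"}, {"malware", "ransom"})
d_subset = masi_distance({"malware"}, {"malware", "ransom"})
d_disjoint = masi_distance({"malware"}, {"traffic"})
```

This partial-credit treatment of subsets is why MASI suits set-valued tag annotations better than exact-match agreement.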
{
"text": "Using our annotations combined using majority voting, we compared the ranking of the log-odds tool against TF-IDF, using normalised discounted cumulative gain (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) . This is a metric used to evaluate the usefulness of a ranking of a list, by measuring the quality (salience) of tokens returned from the tool. We use discounted cumulative gain with the annotations of salient tokens, as the metric increases the weight of errors towards the top of the ranked list, compared to other rank correlation measures, such as Kendall's tau. Additionally, we do not have ground truth information on the ordering of all tokens.",
"cite_spans": [
{
"start": 159,
"end": 190,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discounted Cumulative Gain",
"sec_num": "4.4"
},
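Normalised discounted cumulative gain can be computed as in the sketch below (function names are ours). Each item's relevance is discounted by the log of its rank, which is why demoting a highly relevant item costs more than demoting a marginal one:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: the rank-1 item contributes fully,
    later items are discounted by log2(rank + 1)."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalise by the DCG of the ideal (descending-relevance) ordering,
    giving a score in [0, 1]."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance of each ranked term, in the order a tool returned them.
perfect = ndcg([3, 2, 1, 0])   # ideal ordering
swapped = ndcg([0, 2, 1, 3])   # most relevant term demoted to last place
```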
{
"text": "For the WannaCry event, our log-odds tool scored 0.979 compared to TF-IDF of 0.877. For the random event, the log-odds tool scored 0.978 compared to TF-IDF of 0.753.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discounted Cumulative Gain",
"sec_num": "4.4"
},
{
"text": "For both events, the log-odds tool had a greater discounted cumulative gain score than the TF-IDF approach, finding the ranking of terms provided by the log-odds tool produced more relevant salient terms than the TF-IDF method, for our forum dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discounted Cumulative Gain",
"sec_num": "4.4"
},
{
"text": "Detecting trending topics on noisy social media data is not a new problem for information retrieval and NLP. However, we believe our application of an existing statistical method onto a longitudinal dataset provides a novel lightweight approach to detecting trending terms, which returns terms of more relevance than TF-IDF, and remains computationally less expensive than topic modelling such as LDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "This work provided an initial step towards detecting temporal linguistic changes over time, by preprocessing text data, followed by using a Bayesian approach with a moving prior and target window depending on whether a user is observing short or long term trends. While our method does not identify the relevant windows itself, the tool can be combined with trending topic detection techniques to identify lexically distinct events, where some terms may re-trend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Having shown that the statistical model is strong, and using a Bayesian approach can support new and evolving slang in the dataset without fine tuning a language model, we recognise that there are ways to further improve the NLP of cybersecurity forum texts. For instance, we can improve preprocessing in order to better deal with noisy texts: this includes the detection of misspellings, orthographic variation, acronyms and abbreviation, and deliberate obfuscation such as leetspeak. In addition, we can start to incorporate the detection of multiword expressions and named entity recognition techniques for noisy language, since both are likely to be of interest to researchers analysing language use in cybersecurity forums.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In future work we aim to increase understanding of the evolution of forums, changing language over time, and the changing topics of discussion by forum members. We also aim to automatically detect and extract events in the CrimeBB dataset. Although we have focused on analysing forum data, the tool can be used to explore trends in other cor-pora. In future work, we plan to use this approach to analyse how spam emails have changed following the COVID-19 pandemic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this work, we presented a new use-case for the log-odds tool introduced by Monroe et al. (2008) and implemented in the tidylo R library by Silge et al. (2020) , for detecting trending terms in longitudinal historical noisy text data of an underground hacking forum. The tool can be used for both detecting short term and long term trends depending on the time windowing and separation of windows selected. Using annotations of salient terms during both discussion of WannaCry, and a randomly chosen duration, we found our approach to produce more relevant salient terms over TF-IDF.",
"cite_spans": [
{
"start": 92,
"end": 98,
"text": "(2008)",
"ref_id": null
},
{
"start": 142,
"end": 161,
"text": "Silge et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The texts are fabricated so as to preserve user anonymity, but they are based on real ones we have encountered in the database.3 A forum is the whole site, and a subforum or bulletin board is a page on the site, dedicated to a given general topic and created by the administrators. Subforums contain membercreated threads consisting of an ordered set of posts typically focused on a single topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.cambridgecybercrime.uk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the Cambridge Cybercrime Centre for access to the CrimeBB dataset. We also thank our colleagues at the Cambridge Cybercrime Centre. This work was supported by the Economic and Social Research Council (ESRC), grant number ES/T008466/1. The third and fourth authors are supported by Cambridge Assessment, University of Cambridge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A multi-task approach for named entity recognition in social media data",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "148--153",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4419"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L\u00f3pez- Monroy, and Thamar Solorio. 2017. A multi-task ap- proach for named entity recognition in social media data. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 148-153, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ioannis Kompatsiaris, and Alejandro Jaimes. 2013. Sensing Trending Topics in Twitter. IEEE Transactions on Multimedia",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Luca Maria Aiello",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Petkos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Symeon",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Papadopoulos",
"suffix": ""
},
{
"first": "Ayse",
"middle": [],
"last": "Skraba",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goker",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "15",
"issue": "",
"pages": "1268--1282",
"other_ids": {
"DOI": [
"10.1109/TMM.2013.2265080"
]
},
"num": null,
"urls": [],
"raw_text": "Luca Maria Aiello, Georgios Petkos, Carlos Martin, David Corney, Symeon Papadopoulos, Ryan Skraba, Ayse Goker, Ioannis Kompatsiaris, and Alejandro Jaimes. 2013. Sensing Trending Topics in Twit- ter. IEEE Transactions on Multimedia, 15(6):1268- 1282.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Corpus and Deep Learning Classifier for Collection of Cyber Threat Indicators in Twitter Stream",
"authors": [
{
"first": "Vahid",
"middle": [],
"last": "Behzadan",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Aguirre",
"suffix": ""
},
{
"first": "Avishek",
"middle": [],
"last": "Bose",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Big Data (Big Data)",
"volume": "",
"issue": "",
"pages": "5002--5007",
"other_ids": {
"DOI": [
"10.1109/BigData.2018.8622506"
]
},
"num": null,
"urls": [],
"raw_text": "Vahid Behzadan, Carlos Aguirre, Avishek Bose, and William Hsu. 2018. Corpus and Deep Learning Clas- sifier for Collection of Cyber Threat Indicators in Twitter Stream. In 2018 IEEE International Con- ference on Big Data (Big Data), pages 5002-5007, Seattle, WA, USA. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mapping the underground: Supervised discovery of cybercrime supply chains",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bhalerao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Aliapoulios",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Shumailov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Afroz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccoy",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 APWG Symposium on Electronic Crime Research (eCrime)",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {
"DOI": [
"10.1109/eCrime47957.2019.9037582"
]
},
"num": null,
"urls": [],
"raw_text": "R. Bhalerao, M. Aliapoulios, I. Shumailov, S. Afroz, and D. McCoy. 2019. Mapping the underground: Supervised discovery of cybercrime supply chains. In 2019 APWG Symposium on Electronic Crime Re- search (eCrime), pages 1-16.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tkol, httt, and r/radiohead: High affinity terms in Reddit communities",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Bhandari",
"suffix": ""
},
{
"first": "Caitrin",
"middle": [],
"last": "Armstrong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)",
"volume": "",
"issue": "",
"pages": "57--67",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5508"
]
},
"num": null,
"urls": [],
"raw_text": "Abhinav Bhandari and Caitrin Armstrong. 2019. Tkol, httt, and r/radiohead: High affinity terms in Reddit communities. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 57-67, Hong Kong, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A novel approach for detection and ranking of trendy and emerging cyber threat events in twitter streams",
"authors": [
{
"first": "Avishek",
"middle": [],
"last": "Bose",
"suffix": ""
},
{
"first": "Vahid",
"middle": [],
"last": "Behzadan",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Aguirre",
"suffix": ""
},
{
"first": "William",
"middle": [
"H"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3341161.3344379"
]
},
"num": null,
"urls": [],
"raw_text": "Avishek Bose, Vahid Behzadan, Carlos Aguirre, and William H. Hsu. 2019. A novel approach for detec- tion and ranking of trendy and emerging cyber threat events in twitter streams. In Proceedings of the 2019",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '19",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "871--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '19, page 871-878, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically identifying the function and intent of posts in underground forums",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Caines",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Pastrana",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Hutchings",
"suffix": ""
},
{
"first": "Paula",
"middle": [
"J"
],
"last": "Buttery",
"suffix": ""
}
],
"year": 2018,
"venue": "Crime Science",
"volume": "7",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s40163-018-0094-4"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Caines, Sergio Pastrana, Alice Hutchings, and Paula J. Buttery. 2018. Automatically identifying the function and intent of posts in underground fo- rums. Crime Science, 7(1):19.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Results of the WNUT2017 shared task on novel and emerging entity recognition",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Van Erp",
"suffix": ""
},
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "140--147",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4418"
]
},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recogni- tion. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Interactive topic modeling",
"authors": [
{
"first": "Yuening",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Brianna",
"middle": [],
"last": "Satinoff",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Machine Learning",
"volume": "95",
"issue": "",
"pages": "423--469",
"other_ids": {
"DOI": [
"10.1007/s10994-013-5413-0"
]
},
"num": null,
"urls": [],
"raw_text": "Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. Interactive topic modeling. Machine Learning, 95(3):423-469.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representation, LDA topic modelling and deep learning for emerging named entity recognition from social media",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Jansson",
"suffix": ""
},
{
"first": "Shuhua",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "154--159",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4420"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Jansson and Shuhua Liu. 2017. Distributed rep- resentation, LDA topic modelling and deep learning for emerging named entity recognition from social media. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 154-159, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cumulated gain-based evaluation of ir techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Trans. Inf. Syst",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {
"DOI": [
"10.1145/582415.582418"
]
},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of ir techniques. ACM Trans. Inf. Syst., 20(4):422-446.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bursty and Hierarchical Structure in Streams",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2003,
"venue": "Data Mining and Knowledge Discovery",
"volume": "7",
"issue": "",
"pages": "373--397",
"other_ids": {
"DOI": [
"10.1023/A:1024940629314"
]
},
"num": null,
"urls": [],
"raw_text": "Jon Kleinberg. 2003. Bursty and Hierarchical Struc- ture in Streams. Data Mining and Knowledge Dis- covery, 7(4):373-397.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Temporal Dynamics of On-Line Information Streams",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "221--238",
"other_ids": {
"DOI": [
"10.1007/978-3-540-28608-0_11"
]
},
"num": null,
"urls": [],
"raw_text": "Jon Kleinberg. 2016. Temporal Dynamics of On- Line Information Streams, pages 221-238. Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Time series topic modeling and bursty topic detection of correlated news and twitter",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Koike",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Masaharu",
"middle": [],
"last": "Yoshioka",
"suffix": ""
},
{
"first": "Noriko",
"middle": [],
"last": "Kando",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "917--921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Koike, Yusuke Takahashi, Takehito Utsuro, Masaharu Yoshioka, and Noriko Kando. 2013. Time series topic modeling and bursty topic detection of correlated news and twitter. In Proceedings of the Sixth International Joint Conference on Natural Lan- guage Processing, pages 917-921, Nagoya, Japan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Who Is Marcus Hutchins?",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Krebs",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Krebs. 2017. Who Is Marcus Hutchins? https://krebsonsecurity.com/2017/09/ who-is-marcus-hutchins/.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Identifying Digital Threats in a Hacker Web Forum",
"authors": [
{
"first": "Mitch",
"middle": [],
"last": "Macdonald",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Monk",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2808797.2808878"
]
},
"num": null,
"urls": [],
"raw_text": "Mitch Macdonald, Richard Frank, Joseph Mei, and Bryan Monk. 2015. Identifying Digital Threats in a Hacker Web Forum. In Proceedings of the 2015",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 -ASONAM '15",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "926--933",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 - ASONAM '15, pages 926-933, Paris, France. ACM Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Mining Newsworthy Topics from Social Media",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "Ayse",
"middle": [],
"last": "Goker",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Social Media Analysis",
"volume": "602",
"issue": "",
"pages": "21--43",
"other_ids": {
"DOI": [
"http://link.springer.com/10.1007/978-3-319-18458-6_2"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Martin, David Corney, and Ayse Goker. 2015. Mining Newsworthy Topics from Social Media. In Advances in Social Media Analysis, volume 602 of Studies in Computational Intelligence, pages 21-43. Springer International Publishing, Cham.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict",
"authors": [
{
"first": "L",
"middle": [],
"last": "Burt",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"P"
],
"last": "Monroe",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"M"
],
"last": "Colaresi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Quinn",
"suffix": ""
}
],
"year": 2008,
"venue": "Political Analysis",
"volume": "16",
"issue": "4",
"pages": "372--403",
"other_ids": {
"DOI": [
"10.1093/pan/mpn018"
]
},
"num": null,
"urls": [],
"raw_text": "Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin' Words: Lexical Feature Se- lection and Evaluation for Identifying the Content of Political Conflict. Political Analysis, 16(4):372- 403.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Characterizing Eve: Analysing Cybercrime Actors in a Large Underground Forum",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Pastrana",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Hutchings",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Caines",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Buttery",
"suffix": ""
}
],
"year": 2018,
"venue": "Research in Attacks, Intrusions, and Defenses",
"volume": "11050",
"issue": "",
"pages": "207--227",
"other_ids": {
"DOI": [
"10.1007/978-3-030-00470-5_10"
]
},
"num": null,
"urls": [],
"raw_text": "Sergio Pastrana, Alice Hutchings, Andrew Caines, and Paula Buttery. 2018a. Characterizing Eve: Analysing Cybercrime Actors in a Large Under- ground Forum. In Research in Attacks, Intru- sions, and Defenses, volume 11050, pages 207-227. Springer International Publishing, Cham.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "CrimeBB: Enabling Cybercrime Research on Underground Forums at Scale",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Pastrana",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"R"
],
"last": "Thomas",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Hutchings",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Clayton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3178876.3186178"
]
},
"num": null,
"urls": [],
"raw_text": "Sergio Pastrana, Daniel R. Thomas, Alice Hutchings, and Richard Clayton. 2018b. CrimeBB: Enabling Cybercrime Research on Underground Forums at Scale. In Proceedings of The Web Conference 2018, Lyon, France.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "DISCOVER: Mining Online Chatter for Emerging Cyber Threats",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Sapienza",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sindhu Kiranmai Ernala",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Bessi",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ferrara",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion of the The Web Conference 2018 on The Web Conference 2018 -WWW '18",
"volume": "",
"issue": "",
"pages": "983--990",
"other_ids": {
"DOI": [
"10.1145/3184558.3191528"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Sapienza, Sindhu Kiranmai Ernala, Alessandro Bessi, Kristina Lerman, and Emilio Ferrara. 2018. DISCOVER: Mining Online Chatter for Emerging Cyber Threats. In Companion of the The Web Con- ference 2018 on The Web Conference 2018 -WWW '18, pages 983-990, Lyon, France. ACM Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Peaks and persistence: modeling the shape of microblog conversations",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Shamma",
"suffix": ""
},
{
"first": "Lyndon",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"F"
],
"last": "Churchill",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the ACM 2011 conference on Computer supported cooperative work -CSCW '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1958824.1958878"
]
},
"num": null,
"urls": [],
"raw_text": "David A. Shamma, Lyndon Kennedy, and Elizabeth F. Churchill. 2011. Peaks and persistence: modeling the shape of microblog conversations. In Proceed- ings of the ACM 2011 conference on Computer sup- ported cooperative work -CSCW '11, page 355, Hangzhou, China. ACM Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "2020. tidylo: Weighted Tidy Log Odds Ratio",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Silge",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Schnoebelen",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Silge, Alex Hayes, and Tyler Schnoebelen. 2020. tidylo: Weighted Tidy Log Odds Ratio. https:// github.com/juliasilge/tidylo.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Lessons learned review of the WannaCry Ransomware Cyber Attack",
"authors": [
{
"first": "William",
"middle": [],
"last": "Smart",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Smart. 2018. Lessons learned review of the WannaCry Ransomware Cyber Attack. Technical report, Department of Health and Social Care.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sp\u00e4rck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1972,
"venue": "Journal of Documentation",
"volume": "28",
"issue": "",
"pages": "11--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sp\u00e4rck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11-21.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Applying a burst model to detect bursty topics in a topic model",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Masaharu",
"middle": [],
"last": "Yoshioka",
"suffix": ""
},
{
"first": "Noriko",
"middle": [],
"last": "Kando",
"suffix": ""
},
{
"first": "Tomohiro",
"middle": [],
"last": "Fukuhara",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Yoji",
"middle": [],
"last": "Kiyota",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "239--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Takahashi, Takehito Utsuro, Masaharu Yoshioka, Noriko Kando, Tomohiro Fukuhara, Hiroshi Nakagawa, and Yoji Kiyota. 2012. Applying a burst model to detect bursty topics in a topic model. In Advances in Natural Language Processing, pages 239-249, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Characterizing activity on the deep and dark web",
"authors": [
{
"first": "Nazgol",
"middle": [],
"last": "Tavabi",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Bartley",
"suffix": ""
},
{
"first": "Andres",
"middle": [],
"last": "Abeliuk",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Ferrara",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Lerman",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of The 2019 World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "206--213",
"other_ids": {
"DOI": [
"10.1145/3308560.3316502"
]
},
"num": null,
"urls": [],
"raw_text": "Nazgol Tavabi, Nathan Bartley, Andres Abeliuk, Sandeep Soni, Emilio Ferrara, and Kristina Lerman. 2019. Characterizing activity on the deep and dark web. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 206-213, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Incorporating metadata into content-based user embeddings",
"authors": [
{
"first": "Linzi",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "45--49",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4406"
]
},
"num": null,
"urls": [],
"raw_text": "Linzi Xing and Michael J. Paul. 2017. Incorporating metadata into content-based user embeddings. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 45-49, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Random Event with log-odds",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Random Event with TF-IDF",
"num": null,
"uris": null,
"type_str": "figure"
}
}
}
}