{
"paper_id": "N16-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:38:36.734332Z"
},
"title": "Expectation-Regulated Neural Model for Event Mention Extraction",
"authors": [
{
"first": "Ching-Yun",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We tackle the task of extracting tweets that mention a specific event from all tweets that contain relevant keywords, for which the main challenges include unbalanced positive and negative cases, and the unavailability of manually labeled training data. Existing methods leverage a few manually given seed events and large unlabeled tweets to train a classifier, by using expectation regularization training with discrete ngram features. We propose a LSTM-based neural model that learns tweet-level features automatically. Compared with discrete ngram features, the neural model can potentially capture non-local dependencies and deep semantic information, which are more effective for disambiguating subtle semantic differences between true event mentions and false cases that use similar wording patterns. Results on both tweets and forum posts show that our neural model is more effective compared with a state-of-the-art discrete baseline.",
"pdf_parse": {
"paper_id": "N16-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "We tackle the task of extracting tweets that mention a specific event from all tweets that contain relevant keywords, for which the main challenges include unbalanced positive and negative cases, and the unavailability of manually labeled training data. Existing methods leverage a few manually given seed events and large unlabeled tweets to train a classifier, by using expectation regularization training with discrete ngram features. We propose a LSTM-based neural model that learns tweet-level features automatically. Compared with discrete ngram features, the neural model can potentially capture non-local dependencies and deep semantic information, which are more effective for disambiguating subtle semantic differences between true event mentions and false cases that use similar wording patterns. Results on both tweets and forum posts show that our neural model is more effective compared with a state-of-the-art discrete baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A Distributed Denial of Service (DDoS) attack employs multiple compromised systems to interrupt or suspend services of a host connected to the Internet. Victims are often high-profile web servers such as banks or credit card payment gateways, and therefore a single attack may cause considerable loss. The aim of this paper is to build an automatic system which can extract DDoS event mentions from social media, a timely information source for events taking place around the world, so that the mined emerging incidents can serve as early DDoS warnings or signs for Internet service providers. Ritter et al. (2015) proposed the first work to extract cybersecurity event mentions from raw Twitter stream. They investigated three different event categories, namely DDoS attacks, data breaches and account hijacking, by tracking the keywords ddos, breach and hacked, respectively. Not all tweets containing the keywords describe events. For example, the tweet \"give me paypall or i will tell my mum and ddos u\" shows a metaphor rather than a DDoS event. As a result, the event mention extraction task involves a classification task that filters out true events from all tweets that contain event keywords. Two main challenges exist for this task. First, the numbers of positive and negative examples are typically unbalanced. In our datasets, only about 22% of the tweets that contain the term ddos are mentions to DDoS attack events. Second, there is typically little manual annotation available. Ritter et al. (2015) tackled the challenges by weakly supervising a classification model with a small number of human-provided seed events.",
"cite_spans": [
{
"start": 594,
"end": 614,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
},
{
"start": 1495,
"end": 1515,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, Ritter et al. exploit expectation regularization (ER; Mann and McCallum (2007) ) for semi-supervised learning from large amounts of raw tweets that contain the event keyword. They show that the ER approach outperforms semisupervised expectation-maximization and one-class support vector machine on the task. They build a logistic regression classifier, using few humanlabeled seed events and domain knowledge on the ratio between positive and negative examples for ER in training. Results show that the regulariza-tion method was effective on classifying unbalanced datasets.",
"cite_spans": [
{
"start": 15,
"end": 68,
"text": "Ritter et al. exploit expectation regularization (ER;",
"ref_id": null
},
{
"start": 69,
"end": 93,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ritter et al. use manually-defined discrete features. However, the event mention extraction task is highly semantic-driven, and simple textual patterns may suffer limitations in representing subtle semantic differences between true event mentions and false cases with similar word patterns. Recently, deep learning received increasing research attention in the NLP community (Bengio, 2009; Mikolov et al., 2013; Pennington et al., 2014; Kalchbrenner et al., 2014; Vo and Zhang, 2015) . One important advantage of deep learning is automatic representation learning, which can effectively encodes syntactic and information about words, phrases and sentences in low-dimensional dense vectors.",
"cite_spans": [
{
"start": 375,
"end": 389,
"text": "(Bengio, 2009;",
"ref_id": "BIBREF0"
},
{
"start": 390,
"end": 411,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 412,
"end": 436,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 437,
"end": 463,
"text": "Kalchbrenner et al., 2014;",
"ref_id": "BIBREF24"
},
{
"start": 464,
"end": 483,
"text": "Vo and Zhang, 2015)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we exploit a deep neural model for event mention extraction, using word embeddings and a novel LSTM-based neural network structure to automatically obtain features for a tweet. Results on two human-annotated datasets show that the proposed LSTM-based representation yields significant improvements over Ritter et al. (2015) .",
"cite_spans": [
{
"start": 317,
"end": 337,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In terms of scope, our work falls into the area of information extraction from social media (Guo et al., 2013; Li et al., 2015) . The proposed event mention extraction system is domain-specific, similar to works that aim at detecting categorized events such as disaster outbreak (Sakaki et al., 2010; Neubig et al., 2011; Li and Cardie, 2013) and cybersecurity events (Ritter et al., 2015) . Such work typically trains semi-supervised classifiers to determine events of interest due to the limitation of annotated data. On the other hand, a few studies devote to open domain event extraction (Benson et al., 2011; Ritter et al., 2012; Petrovi\u0107 et al., 2010; Diao et al., 2012; Chierichetti et al., 2014; Li et al., 2014; Qiu and Zhang, 2014) , in which an event category is not predefined, and clustering models are applied to automatically induce event types.",
"cite_spans": [
{
"start": 92,
"end": 110,
"text": "(Guo et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 111,
"end": 127,
"text": "Li et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 279,
"end": 300,
"text": "(Sakaki et al., 2010;",
"ref_id": "BIBREF45"
},
{
"start": 301,
"end": 321,
"text": "Neubig et al., 2011;",
"ref_id": "BIBREF33"
},
{
"start": 322,
"end": 342,
"text": "Li and Cardie, 2013)",
"ref_id": "BIBREF27"
},
{
"start": 368,
"end": 389,
"text": "(Ritter et al., 2015)",
"ref_id": "BIBREF41"
},
{
"start": 592,
"end": 613,
"text": "(Benson et al., 2011;",
"ref_id": "BIBREF1"
},
{
"start": 614,
"end": 634,
"text": "Ritter et al., 2012;",
"ref_id": "BIBREF40"
},
{
"start": 635,
"end": 657,
"text": "Petrovi\u0107 et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 658,
"end": 676,
"text": "Diao et al., 2012;",
"ref_id": "BIBREF10"
},
{
"start": 677,
"end": 703,
"text": "Chierichetti et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 704,
"end": 720,
"text": "Li et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 721,
"end": 741,
"text": "Qiu and Zhang, 2014)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In terms of method, the proposed model is in line with recent methods on deep learning for neural feature representations, which have seen success in some NLP tasks (Collobert and Weston, 2008; Collobert et al., 2011; Chen and Manning, 2014) .",
"cite_spans": [
{
"start": 165,
"end": 193,
"text": "(Collobert and Weston, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 194,
"end": 217,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF7"
},
{
"start": 218,
"end": 241,
"text": "Chen and Manning, 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Competitive results have been obtained in sentiment analysis (Kalchbrenner et al., 2014; Kim, 2014; Socher et al., 2013b) , semantic relation classification (Hashimoto et al., 2013; Liu et al., 2015) , and question answering (Dong et al., 2015; Iyyer et al., 2014) . In addition, deep learning models have shown promising results on syntactic parsing (Dyer et al., 2015; and machine translation (Cho et al., 2014) . Compared to syntactic problems, semantic tasks see relatively larger improvements by using neural architectures, possible because of the capability of neural features in better representing semantic information, which is relatively more difficult to capture by discrete indicator features. We consider event mention extraction as a semantic-heavy task and demonstrate that it can benefit significantly from neural feature representations.",
"cite_spans": [
{
"start": 61,
"end": 88,
"text": "(Kalchbrenner et al., 2014;",
"ref_id": "BIBREF24"
},
{
"start": 89,
"end": 99,
"text": "Kim, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 100,
"end": 121,
"text": "Socher et al., 2013b)",
"ref_id": "BIBREF47"
},
{
"start": 157,
"end": 181,
"text": "(Hashimoto et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 182,
"end": 199,
"text": "Liu et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 225,
"end": 244,
"text": "(Dong et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 245,
"end": 264,
"text": "Iyyer et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 351,
"end": 370,
"text": "(Dyer et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 395,
"end": 413,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We take the method of Ritter et al. (2015) as a baseline. Given a tweet containing the keyword ddos, the task is to determine whether a DDoS attack event is mentioned in the tweet. A logistic regression classifier is used, which is trained by maximum-likelihood with ER on unlabeled tweets, and automatically generated positive examples from a few seed events.",
"cite_spans": [
{
"start": 22,
"end": 42,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3"
},
{
"text": "Ritter et al. (2015) manually pick seed events, represented as (ENTITY, DATE) tuples, and treated tweets published on DATE referencing ENTITY as positive training instances. For example, (GitHub, 2013 July 29) 1 is defined as a seed DDoS event, and the tweet \"@amosie GitHub is experiencing a large DDos https://t.co/cqEIR6Rz6t\" posted on 2013 July 29 is seen as an event mention since it contains the EN-TITY GitHub as well as matches the DATE 2013 July 29. Those tweets with the word ddos but not matching any seed events are grouped as unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seed Events",
"sec_num": "3.1"
},
{
"text": "Each tweet is represented by a sparse binary vector for feature extraction, where the features consist of bi-to five-grams containing a name entity or the event keyword. For better generalization, all Ritter et al. (2015) .",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sparse Feature Representation",
"sec_num": "3.2"
},
{
"text": "words other than common nouns and verbs are replaced with their part-of-speech (POS) tags. Table 1 shows an example of contextual features extracted from the tweet \"@amosie GitHub is experiencing a large DDos https://t.co/cqEIR6Rz6t\". As can be seen from the table, the features contain shallow wording patterns from a tweet, which are local to a 5-word window. In contrast, the observed average tweet length is 16 words, with the longest tweet containing 48 words, which is difficult to fully represent using only a local window. Our neural model addresses the limitations by learning global tweetlevel syntactic and semantic features automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sparse Feature Representation",
"sec_num": "3.2"
},
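To make the baseline featurization concrete, below is a minimal Python sketch of contextual n-gram extraction as described above. It assumes POS-tagged input and keeps only n-grams whose window contains the event keyword; the exact generalization rules and named entity handling of Ritter et al. (2015) are simplified assumptions.

```python
def contextual_ngrams(tagged_tokens, keyword="ddos", n_min=2, n_max=5):
    """tagged_tokens: list of (token, POS) pairs for one tweet."""
    # Keep nouns/verbs as words; replace all other words with POS tags.
    symbols = [tok.lower() if pos.startswith(("NN", "VB")) else pos
               for tok, pos in tagged_tokens]
    tokens = [tok.lower() for tok, _ in tagged_tokens]
    features = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(symbols) - n + 1):
            # Keep only n-grams whose window contains the event keyword.
            if keyword in tokens[i:i + n]:
                features.add(" ".join(symbols[i:i + n]))
    return features

print(contextual_ngrams([("GitHub", "NNP"), ("is", "VBZ"),
                         ("experiencing", "VBG"), ("a", "DT"),
                         ("large", "JJ"), ("DDos", "NN")]))
```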
{
"text": "With the feature vector f s \u2208 R d defined for a given tweet s, the probability of s being an event mention is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (y = 1|s) = 1 1 + e \u2212 \u03b8 fs",
"eq_num": "(1)"
}
],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
{
"text": "where \u03b8 \u2208 R d is a weight vector. Given a set of event mentions M = m 1 , m 2 , ..., m j and a set of unlabeled instances Ritter et al. (2015) train an ER model that maximizes the log-likelihood of positive data while keeping the conditional probabilities on unlabeled data consistent with the human-provided expectations. The objective function is defined as:",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
{
"text": "O(\u03b8; M, U ) = m\u2208M log p \u03b8 (y = 1|m) Log Likelihood \u2212 \u03bb U \u2206(p,p U \u03b8 ) Expectation Regularization \u2212 \u03bb L 2 \u03b8 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
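Below is a minimal numpy sketch of the objective in Equation 2. The Bernoulli KL-divergence form of the penalty between the label prior and the model's average prediction on U follows Mann and McCallum (2007); function names and default hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def er_objective(theta, M, U, p_hat=0.22, lambda_u=1.0, lambda_l2=0.1):
    """Value of Equation 2 to be maximized.

    M: feature matrix of positive instances; U: feature matrix of
    unlabeled instances; p_hat: human-provided label expectation prior.
    """
    # Log-likelihood of the positive (seed-derived) data.
    log_likelihood = np.sum(np.log(sigmoid(M @ theta)))
    # Model's average positive probability on the unlabeled data.
    p_tilde = np.mean(sigmoid(U @ theta))
    # Delta(p_hat, p_tilde) as a Bernoulli KL divergence (assumed form).
    delta = (p_hat * np.log(p_hat / p_tilde)
             + (1.0 - p_hat) * np.log((1.0 - p_hat) / (1.0 - p_tilde)))
    return log_likelihood - lambda_u * delta - lambda_l2 * np.sum(theta ** 2)
```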
{
"text": "We follow Ritter et al. 2015, using a set of seed events and large raw tweets for ER. However, we take a fully-automated approach to find seed events, since manual listing of seed DDoS events can be a costly and time consuming process, and requires a certain level of expert knowledge. We leverage news articles to collect seed events, representing events as (ENTITY, DATE RANGE) tuples. The ENTITY in our seed events is defined as a name entity that appears in either the assailant or victim role of an attack event labeled by framesemantic parsing, and the DATE RANGE is a date window around the news publication date. We use a date window rather than a definite news publication date because news articles are not always published on the day a DDoS attack happened. Some examples are given in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 796,
"end": 804,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
{
"text": "We parse DDoS attack news collected from http://www.ddosattacks.net 2 with a state-of-the-art frame-semantic parsing system (SEMAFOR; Das et al. (2010) ). Tweets are gathered using the Twitter Streaming API 3 with a case-insensitive track keyword ddos. Name entities are extracted from both news articles and tweets using a Twitter-tuned NLP pipeline (Ritter et al., 2011 ). 4 Table 2 shows two example DDoS attack news, where the ENTITY values are included in the victim roles, RBS, Ulster Bank, GovCERT and FBI in the first news, and Essex in the second. It is worth noting that the DDoS attack on RBS, Ulster Bank and Natwest was actually on 2015 July 31. The correlation between tweet mentions and news reports are shown in Figure 1 , where each bar indicates the number of tweets (y-axis) containing a certain EN-TITY posted on a certain DATE (x-axis). According to these, we used a 11-day (-3,7) window centered at the news publication date for extracting positive training instances. Experiments show that our method can find seed events with 97% accuracy.",
"cite_spans": [
{
"start": 134,
"end": 151,
"text": "Das et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 351,
"end": 371,
"text": "(Ritter et al., 2011",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 728,
"end": 736,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Logistic Regression Classification with Expectation Regularization",
"sec_num": "3.3"
},
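As a concrete illustration of the seed matching rule, here is a small Python sketch that marks a tweet as a positive instance when it mentions a seed ENTITY and falls inside the 11-day (-3, +7) window around the news publication date. The data structures and case-insensitive substring matching are simplifying assumptions; the actual pipeline uses named entity recognition.

```python
from datetime import date, timedelta

WINDOW_BEFORE, WINDOW_AFTER = 3, 7  # the 11-day (-3, +7) window

def match_seed(tweet_text, tweet_date, seeds):
    """seeds: list of (entity, news_publication_date) tuples."""
    text = tweet_text.lower()
    for entity, pub_date in seeds:
        in_window = (pub_date - timedelta(days=WINDOW_BEFORE)
                     <= tweet_date
                     <= pub_date + timedelta(days=WINDOW_AFTER))
        if entity.lower() in text and in_window:
            return True
    return False

seeds = [("GitHub", date(2013, 7, 29))]
print(match_seed("@amosie GitHub is experiencing a large DDos",
                 date(2013, 7, 29), seeds))  # True
```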
{
"text": "The overall structure of our representation learning model is shown in Figure 2 . Given a tweet, two LSTM models (Section 5.1) are used to capture its sequential semantic information in the left-to-right and right-to-left directions, respectively. For deep 2) is performed on each LSTM layer to extract rich features. Finally, features from the leftto-right and right-to-left components are combined using neural tensors (Section 5.3), and the resulting features are used as inputs to a feed-forward neural network for classification (Section 5.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Neural Event Mention Extraction",
"sec_num": "5"
},
{
"text": "The main goal of our neural model is to find dense vector representations for tweets, which are effective features for event mention extraction. Starting from word embeddings (Mikolov et al., 2013; Pennington et al., 2014) , a natural way of modeling a tweet is to treat it as a sequence and use a recurrent neural network (RNN) structure (Pearlmutter, 1989) . LSTM (Hochreiter and Schmidhuber, 1997) is a variant of RNNs, which is better at exploiting long range context thanks to purpose-built units called memory blocks to store history information.",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 198,
"end": 222,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 339,
"end": 358,
"text": "(Pearlmutter, 1989)",
"ref_id": "BIBREF35"
},
{
"start": 366,
"end": 400,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "LSTM has shown improvements over conventional RNN in many NLP tasks (Jozefowicz et al., 2015; Graves et al., 2013b; Cho et al., 2014) . A typical LSTM memory block consists of three gates (i.e. input, forget and output), which control the flow of information, and a memory cell to store the temporal state of the network . While traditionally the values of gates are decided by the input and hidden states in a RNN, we take a variation with peephole connections , which allows gates in the same memory block to learn from the current cell state. In addition, to simplify model complexity, we use coupled forget and input gates (Cho et al., 2014) . Figure 3 illustrates the memory block used for our tweet representation. The network unit activations for input x t at time step t are defined by the following set of equations: Gates at step t:",
"cite_spans": [
{
"start": 68,
"end": 93,
"text": "(Jozefowicz et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 94,
"end": 115,
"text": "Graves et al., 2013b;",
"ref_id": "BIBREF18"
},
{
"start": 116,
"end": 133,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 627,
"end": 645,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 648,
"end": 656,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i t = \u03c3(W ix x t + W ih h t\u22121 + W ic c t\u22121 + b i ) (4) f t = 1 \u2212 i t (5) o t = \u03c3(W ox x t + W oh h t\u22121 + W oc c t + b o )",
"eq_num": "(6)"
}
],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "Cell:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = f t \u2297 c t\u22121 + i t \u2297 tanh(W cx x t + W ch h t\u22121 + b c in )",
"eq_num": "(7)"
}
],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "Hidden State:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = o t \u2297 tanh(c t )",
"eq_num": "(8)"
}
],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "The W terms in Equations 4-7 are the weight matrices (W ic and W oc are diagonal weight matrices for peephole connections); the b terms denote bias vectors; \u03c3 is the logistic sigmoid function; and \u2297 computes element-wise multiplication of two vectors. i t , f t and o t are input, forget and output gates, respectively; c t stores the cell state, and h t is the output of the current memory block. Inputs For the inputs x 1 , x 2 , ..., x n , we learn 50-dimension word representations using the skipgram algorithm (Mikolov et al., 2013) . The training corpus was collected from the tweet archive site, and a total of 604,926,764 tweets were used. Each tweet was tokenized using a tweet-adapted tokenizer (Owoputi et al., 2013) , and stopwords and punctuations are removed. The trained model contains 5,251,332 words.",
"cite_spans": [
{
"start": 515,
"end": 537,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF32"
},
{
"start": 705,
"end": 727,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
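For concreteness, the following numpy sketch implements one memory-block step (Equations 4-8), with the coupled forget gate f_t = 1 - i_t and the diagonal peephole weights stored as vectors. Parameter names mirror the equations; the shapes and the parameter-dictionary layout are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One memory-block step; p maps names to weights.

    Shapes (illustrative): W_* are (h, d) or (h, h); the peephole
    weights w_ic, w_oc are the diagonals of W_ic, W_oc stored as
    vectors of size h; biases are vectors of size h.
    """
    # Eq. (4): input gate with peephole on the previous cell state.
    i_t = sigmoid(p["W_ix"] @ x_t + p["W_ih"] @ h_prev
                  + p["w_ic"] * c_prev + p["b_i"])
    # Eq. (5): coupled forget gate.
    f_t = 1.0 - i_t
    # Eq. (7): cell state update.
    c_t = f_t * c_prev + i_t * np.tanh(
        p["W_cx"] @ x_t + p["W_ch"] @ h_prev + p["b_c"])
    # Eq. (6): output gate with peephole on the current cell state.
    o_t = sigmoid(p["W_ox"] @ x_t + p["W_oh"] @ h_prev
                  + p["w_oc"] * c_t + p["b_o"])
    # Eq. (8): block output.
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```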
{
"text": "Layers Recent research has shown that both RNNs and LSTMs can benefit from depth in space (Graves et al., 2013a; Graves et al., 2013b; Sak et al., 2014; Sak et al., 2015) . A deep LSTM is built by stacking multiple LSTM layers, with the output sequence of one layer forming the input sequence for the next, as shown in Figure 2 . At each time step the input goes through multiple non-linear layers, which progressively build up higher level representations from the current level. In our tweet representation model, we embody a deep LSTM architecture with up to 3 layers.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Graves et al., 2013a;",
"ref_id": "BIBREF17"
},
{
"start": 113,
"end": 134,
"text": "Graves et al., 2013b;",
"ref_id": "BIBREF18"
},
{
"start": 135,
"end": 152,
"text": "Sak et al., 2014;",
"ref_id": "BIBREF43"
},
{
"start": 153,
"end": 170,
"text": "Sak et al., 2015)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "LSTM Models",
"sec_num": "5.1"
},
{
"text": "Given a LSTM and an input sequence x 1 , x 2 , ..., x n , using the last state h n as features is a basic representation strategy for the sequence. Apart from this approach, another common feature extraction strategy is to apply pooling (Boureau et al., 2011 ) over all the states h 1 , h 2 , ..., h n to capture the most characteristic information. Pooling extracts fixed dimensional features from h 1 , h 2 , ..., h n , which has variable length. In our model we consider different pool strategies, including max, average and min poolings. For convenience of writing, we refer to the basic feature strategy also as basic pooling in later sections. When there are multiple LSTM layers, the features consist of the pooling results from each layer, concatenated to give a single vector.",
"cite_spans": [
{
"start": 237,
"end": 258,
"text": "(Boureau et al., 2011",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pooling",
"sec_num": "5.2"
},
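The pooling strategies above can be summarized in a few lines; in this sketch, "basic" takes the last hidden state, and per-layer pooling results are concatenated into a single feature vector. Array shapes and the choice of strategies to combine are illustrative assumptions.

```python
import numpy as np

def pool(H, strategy):
    """H: (n x h) matrix of hidden states h_1, ..., h_n for one layer."""
    if strategy == "basic":
        return H[-1]          # last state h_n
    if strategy == "max":
        return H.max(axis=0)
    if strategy == "avg":
        return H.mean(axis=0)
    if strategy == "min":
        return H.min(axis=0)
    raise ValueError(strategy)

def layer_features(layers, strategies=("max", "basic")):
    """layers: list of (n x h) state matrices, one per LSTM layer."""
    return np.concatenate([pool(H, s) for H in layers for s in strategies])
```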
{
"text": "Given the pooling methods, we extract features r f and r b for the forward and backward multilayer LSTMs, respectively. Inspired by Socher et al. (2013a), we use a neural tensor network (NTN) to combine the bi-directional r f and r b \u2208 R d . The network can be formalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
{
"text": "V = tanh(r T f T [1:q] r b + W ntn r f r b + b ntn ) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
{
"text": "where T [1:q] \u2208 R d\u00d7d\u00d7q is a tensor, W ntn \u2208 R q\u00d72d and b ntn \u2208 R q are the weight matrix and bias vector, respectively, as that in the standard form of a neural network. The bilinear tensor product r T f T [1:q] r b is a vector v \u2208 R q , where each entry is computed by one slice of the tensor:",
"cite_spans": [
{
"start": 207,
"end": 212,
"text": "[1:q]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v i = r T f T [i] r b (i = 1, 2, . . . , q)",
"eq_num": "(10)"
}
],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
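A small numpy sketch of the neural tensor combination (Equations 9-10) follows. It assumes the standard-form term applies W_ntn to the concatenation [r_f; r_b], which is consistent with W_ntn lying in R^(q x 2d).

```python
import numpy as np

def ntn_combine(r_f, r_b, T, W_ntn, b_ntn):
    """T: (q, d, d) tensor; W_ntn: (q, 2d); b_ntn: (q,)."""
    # Eq. (10): v_i = r_f^T T[i] r_b for each tensor slice i.
    bilinear = np.einsum("d,qde,e->q", r_f, T, r_b)
    # Standard-form term over the concatenated features [r_f; r_b].
    linear = W_ntn @ np.concatenate([r_f, r_b])
    # Eq. (9): combined representation.
    return np.tanh(bilinear + linear + b_ntn)
```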
{
"text": "1. NSA site went down due to 'internal error', not DDoS attack, agency claims http://t.co/B7AzoLPsKf < isn't that the same thing 2. NSA denies DDOS attack took place on website, claims internal error http://t.co/WW7uFM4Xk5 3. @HostingSocial True Shikha,Enterprises are at a greater risk with increased DDoS attacks & #cloud solns need to take measures for prevention Table 3 : The three false positives in the 100 automatically extracted mentions, where EVENT ENTI-TIES are in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
{
"text": "The NTN combined features are concatenated, and fed into a tanh hidden layer. The output of the layer, f s , becomes the final representation of a tweet, and is used to compute the probability of the tweet being an event mention, as shown in Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network for Feature Combination",
"sec_num": "5.3"
},
{
"text": "The final classifier of the neural network model is Equation 1, consistent with the baseline model. As a result, ER is applied in the same way as Equation 2. The main difference between our model and the baseline is in the definition of f s , the former being a deep neural network and the latter being manual features. Consequently, Equation 1 can be regarded as a softmax layer in our deep neural model, for which all the features and parameters are trained automatically and consistently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5.4"
},
{
"text": "For training, the parameters are initialized uniformly within the interval [\u2212a,a], where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = 6 H k + H k+1",
"eq_num": "(11)"
}
],
"section": "Classification",
"sec_num": "5.4"
},
{
"text": "H k and H k+1 are the numbers of rows and columns of the parameter, respectively (Glorot and Bengio, 2010) . The parameters are learned using stochastic gradient descent with momentum (Rumelhart et al., 1988) . The model is trained by 500 iterations, in each of which unlabeled instances are randomly sampled so that the same numbers of positives and unlabeled data are used.",
"cite_spans": [
{
"start": 81,
"end": 106,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 184,
"end": 208,
"text": "(Rumelhart et al., 1988)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5.4"
},
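The initialization of Equation 11 and the momentum update can be sketched as follows; the learning rate and momentum values are illustrative assumptions, since they are not reported above.

```python
import numpy as np

def glorot_uniform(rows, cols, seed=0):
    """Equation 11: uniform init in [-a, a], a = sqrt(6 / (H_k + H_{k+1}))."""
    a = np.sqrt(6.0 / (rows + cols))
    return np.random.default_rng(seed).uniform(-a, a, size=(rows, cols))

def sgd_momentum_step(param, grad, velocity, lr=0.01, mu=0.9):
    """One stochastic gradient descent update with momentum."""
    velocity = mu * velocity - lr * grad
    return param + velocity, velocity
```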
{
"text": "6 Experiments and Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5.4"
},
{
"text": "We streamed tweet with the track kayword ddos for five months from April 13 to September 13, 2015. In addition, we extracted tweets containing the word ddos from a tweet archive 5 in the period from September 2011 to September 2014. Using the distant seed event extraction scheme described in Section 4, a total number of 930 mentions covering 45 ENTITY were automatically derived. In order to examine whether the automatically-collected instances are true positives and hence form a useful training set, an author of this paper annotated 100 extracted mentions finding that that 3 are false positives, as listed in Table 3 . The result suggests that the automatically extracted mentions are reliable. The remaining tweets were randomly split into a 200-instance development set, a 800-instance test set, and an unlabeled training set. 6 Both the development and test sets were annotated by a human judge and an author of this paper. The inter-annotator agreement on the binary labeled 1000 instances was measured by using Fleiss' kappa (Fleiss et al., 2013) , and the score, which is 0.85 for the data, represents almost perfect agreement according to Landis and Koch (1977) . There were 47 out of the 1,000 tweets that received different labels, for which another human judge made the final decision.",
"cite_spans": [
{
"start": 836,
"end": 837,
"text": "6",
"ref_id": null
},
{
"start": 1037,
"end": 1058,
"text": "(Fleiss et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 1153,
"end": 1175,
"text": "Landis and Koch (1977)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 616,
"end": 623,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "To test the applicability of the proposed mention extraction system on other domains, we collected 400 sentences containing the keyword ddos from dark web. Again each sentence was annotated by two human judges, and the third person made the final decision on conflicting cases. The interannotator agreement kappa score on this dataset is 0.85, consistent with the tweet annotation. Table 4 presents the statistics of the datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "We follow Ritter et al. (2015) and evaluate the performance by the area under the precision-recall curve (AUC), where precision is the fraction of retrieved instances that are event mentions, and re- call is the fraction of gold event mention instances that are retrieved. Precision-recall (PR) curves offer informative pictures on the classification of unbalanced classes (Davis and Goadrich, 2006) .",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
},
{
"start": 373,
"end": 399,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6.2"
},
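The evaluation metric can be computed directly from gold labels and model probabilities; a minimal sketch using scikit-learn:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def pr_auc(y_true, y_scores):
    """Area under the precision-recall curve."""
    precision, recall, _ = precision_recall_curve(y_true, y_scores)
    return auc(recall, precision)

print(pr_auc(np.array([1, 0, 0, 1, 0]),
             np.array([0.9, 0.4, 0.35, 0.8, 0.1])))
```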
{
"text": "For the proposed model, we empirically set the LSTM output vector h t , the NTN output V , and the size of the hidden layer to 32. 7 For the ER model, the human-provided label expectation priorp is set to 0.22 since the percentage of positives in the development set is 22%, and the parameter \u03bb U is set to one-tenth of the positive training data. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Experiments",
"sec_num": "6.3"
},
{
"text": "We first test whether using a NTN to combine the bi-directional representations can give a better performance compared to simply concatenating the two representation vectors. Table 5 gives AUCs of one-layer basic, max, avg and min pooling strategies tested on the tweet development set. We can see that all the four different pooling strategies perform better when the NTN combination is used. As a result, for the following experiments we only consider using NTNs to combine bi-directional representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "6.3.1"
},
{
"text": "Next we observe the effect of using different numbers of LSTM layers in our model. AUCs of basic, max, avg and min pooling strategies with respect to 1, 2 and 3 LSTM layers are presented in Table 5 . In most of the cases, the performance of the model increases when the LSTM architecture goes deeper, and we build our final models using 3 LSTM layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "6.3.1"
},
{
"text": "In the previous experiments, max pooling achieves the highest AUC with the architecture 3-LSTM-layer+NTN, we are interested in whether combining max with other pooling strategies would further increase the performance. Table 6 summarizes the AUC of various combinations, according to which we choose max+basic for final tests.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Pooling Strategies",
"sec_num": "6.3.2"
},
{
"text": "Finally, we test the performance of sparse feature representations as used in the model of Ritter et al. (2015) . Figure 4 shows the PR curves of the sparse representation and the best setting max+basic evaluated on the development set. The AUC of using sparse representation is 0.30 while that of the max+basic model is 0.51. The runtime performances of training with sparse feature representations and neural feature representations are 276.17 and 1137.87 seconds, respectively, running on a single thread of an Intel Core i7-4790 3.60GHz CPU. Figure 5 presents the PR curves of the baseline sparse feature representation and the final neural model evaluated on the datasets, and Table 7 gives the AUC for these test-set evaluations. From the curves we can see that the sparse representation is comparatively less efficient in picking out negative examples, since at a lower recall the model does not gain a higher precision. In contrast, LSTM-based representation demonstrates a better trade-off between recall and precision. We do not have a strong intuition on why the performance on dark web test set is better than that on tweet test set for the proposed model. Discrete Baseline Model (Ritter et al., 2015) LSTM-based Model Top 5: N|0.9|0.0 They dealt with the ddos attacks with grace and confidence. P|0.9|1.0 Thank you.And now, this is my hypothesis, only is a personal thinking, my thought of what happening (or something similar, at least): I think that Agora is under DDOS attacks constantly, maybe for another markets (probably Nucleus if I had to bet for one: right now they have the monopoly, practically, it's one of the three and more knowns and used DM's now (Agora, Nucleus, and Middle-Earth, at least this is my thought) all the vendors of Agora are going to Nucleus too and all publishing their listings there. N|0.9|0.8 But it was basically explaining how the DDOS attacks on SR earlier in the year were the NSA triangulating its position by measuring PING return times and likes. N|0.9|0.1 unforgiven I remember from sr2, many of the sr2 fanboys were all for DDOS attacks on Agora and tormarket if people remember. N|0.8|0.2 you know things be stressful for admins and dev team right now :/keep your heads up guys, the work you do is the front line of our revolution for personal freedoms being regained.everyone here is a freedom fighter, you guys are our captainsthank you ALL for this wonderful community and sense of freedom you have brought us!so get this DDoS attack under control and keep on truckin!!!",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "Ritter et al. (2015)",
"ref_id": "BIBREF41"
},
{
"start": 1193,
"end": 1214,
"text": "(Ritter et al., 2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 546,
"end": 554,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 682,
"end": 689,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Pooling Strategies",
"sec_num": "6.3.2"
},
{
"text": "Top 5: P|0.7|1.0 Until we have proof I don't think we can say who is responsiblemaybe it wasnt tor market who did the ddos, but check this out:http://silkroad5v7dywlc.onion/index.php?topic=8598.0maybe they did initate the ddos in the hopes of proving that their site is superior because they \"fended off\" a ddos attack faster than SRTM is super sketchy! P|0.7|1.0 what's the status?you find it in the first post i set it to GREEN/ORANGE as the site is still under DDOS attack but temporarily accessable.greets P|0.7|1.0 It seems their idea of a \"hack\" is a DDoS attack on the server (which does indeed go on right now, and as all DoS attacks, can result in denial of service) and a brute-force attack on the login system to try to find out users' passwords. P|0.6|1.0 One of the other markets (Nucleus) is paying some blackhat to DDOS most of the other markets, it's all over Reddit.Support here is asleep, I don't know how you can run a market with a daily uptime of 25%.I agree with OP. P|0.7|1.0 He also said he was involved in helping DPR hack into Tormarket's database and launch the DDoS against the Russian cyberattackers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Test",
"sec_num": "6.4"
},
{
"text": "Bottom 5: N|0.5|0.0 I only words I could understand were \"DDoS\" and \"Bastard\". N|0.5|0.9 In general, it seems like they have set the site up to accommodate all parties: escrow, vendor ratings, buyer ratings, quick wallet transactions, etc.Guess we'll see how they deal with the growing pains, DDOS, & hack attempts that will certainly come their way in the near future. N|0.5|0.0 Please ddos him. N|0.5|0.0 Next fucking day, ddos dildos and damage....LEGs wares hit my drop while the market was still floundering like guppies on hot concrete, yeah, that's why. N|0.5|0.2 child pornography, spamm, DDOS etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Test",
"sec_num": "6.4"
},
{
"text": "Bottom 5: N|0.5|0.0 I only words I could understand were \"DDoS\" and \"Bastard\". N|0.9|0.0 They dealt with the ddos attacks with grace and confidence. N|0.5|0.0 DDOS IS PURE BULLSHIT. N|0.5|0.0 can you guys ddos this guy? N|0.5|0.0 The DDoS has nothing to do with this problem. Table 8 : Top 5 and bottom 5 ranked dark web sentences as determined by the baseline and the proposed LSTM-based model. Format: class label|baseline score|neural score. Table 8 shows the top 5 and bottom 5 ranked dark web sentences 9 as scored by the baseline and the proposed LSTM-based model, respectively. For each sentence, the human judgment (P for event mentions and N for non-event mentions) is given, followed by the probability values output by the baseline and the proposed system. Only one of the top five most probable eventmentioning sentences as decided by the baseline is true positive. On the other hand, all of the top five sentences indicated by the proposed model are true positives. We investigate the contextual features that contribute to the false positive case \"They dealt with the ddos attacks with grace and confidence.\" determined by the baseline, and find that the patterns \"DT ddos\", \"ddos attack|NN\", \"DT ddos attack|NN IN\" and \"IN DT ddos\" are ranked 2 nd , 18 th , 111 th , 127 th among the 15,355 contextual patterns, respectively, which have relatively high weights but only carry limited information. In contrast, the LSTM-based model can capture global syntactic and semantic features other than words surrounding ddos to distinguish mentions from non-mentions. From the table we can see that those high-confidence sentences determined by the LSTM-based model are more informative compared with those lower ranked sentences. Figure 6 presents the probability distributions of positive and negative test cases as obtained by the baseline (x-axis) and the LSTM-based model (yaxis), respectively. It can be seen from the figures that the probabilities determined by the LSTMbased model are scattered between 0.0 and 1.0, while those by the baseline are gathered between 0.5 and 0.9, which shows that the proposed neural model can achieve better confidence on classifying event mentions. This demonstrates its stronger differentiating power as compared with discrete indicator features, as hypothesized in the introduction. In addition, for the proposed model a large portion of true positives ( ) are close to the top in both test sets, while more negatives (\u00d7) gather at the bottom of the dark web test set plot, compared to that in the tweet test set. As for the baseline model, many negatives locate around the horizontal centre, with a probability of 0.5, in the tweet test set, which explains why the baseline is relatively less effective on the precision-recall trade-off.",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 8",
"ref_id": null
},
{
"start": 445,
"end": 452,
"text": "Table 8",
"ref_id": null
},
{
"start": 1737,
"end": 1745,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Final Test",
"sec_num": "6.4"
},
{
"text": "We investigated LSTM-based text representation for event mention extraction, finding that automatic features from the deep neural network largely improve the sparse representation method on the task. The model performance can further benefit by exploiting deep LSTM structures and tensor combination of bi-directional features. Results on tweets and dark web forum posts show the effectiveness of the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://status.github.com/messages/2013-07-29",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Most of the articles are about DDoS attack events, while a smaller number discusses the nature of DDoS attacks and related issues.3 https://dev.twitter.com/streaming/overview 4 https://github.com/aritter/twitter nlp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://archive.org/details/twitterstream 6 http://people.sutd.edu.sg/\u02dcyue zhang/pub/naacl16.cyc.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The hidden layer size is chosen by comparing development test scores using the sizes of 16, 32 and 64.8Mann and McCallum (2007) found that \u03bb U does not require tuning for different data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The sentence boundary was detected by NLTK PunktSen-tenceTokenizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Geoffrey Williams for data annotation, Lin Li for data processing, and anonymous reviewers for their informative comments. Yue Zhang is the corresponding author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning deep architectures for AI. Foundations and trends R in Machine Learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2009. Learning deep architectures for AI. Foundations and trends R in Machine Learning, 2(1):1-127.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Event discovery in social media feeds",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "389--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Pro- ceedings of the Annual Meeting of the ACL, pages 389-398, Portland, Oregon.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ask the locals: multiway local pooling for image recognition",
"authors": [
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Ponce",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2011,
"venue": "ICCV, IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "2651--2658",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y-Lan Boureau, Nicolas Le Roux, Francis Bach, Jean Ponce, and Yann LeCun. 2011. Ask the locals: multi- way local pooling for image recognition. In ICCV, IEEE International Conference on, pages 2651-2658.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the Conference on EMNLP, pages 740-750.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Event detection via communication pattern analysis",
"authors": [
{
"first": "Flavio",
"middle": [],
"last": "Chierichetti",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Mahdian",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Pandey",
"suffix": ""
}
],
"year": 2014,
"venue": "International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flavio Chierichetti, Jon Kleinberg, Ravi Kumar, Moham- mad Mahdian, and Sandeep Pandey. 2014. Event de- tection via communication pattern analysis. In Inter- national AAAI Conference on Weblogs and Social Me- dia.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, \u00c7 alar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase represen- tations using RNN encoder-decoder for statistical ma- chine translation. In Proceedings of the Conference on EMNLP, pages 1724-1734, Doha, Qatar.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th ICML",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. In Proceedings of the 25th ICML, pages 160-167.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probabilistic frame-semantic parsing",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The Annual Conference of NAACL",
"volume": "",
"issue": "",
"pages": "948--956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The An- nual Conference of NAACL, pages 948-956, Los An- geles, California.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The relationship between precision-recall and ROC curves",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Goadrich",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd ICML",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Davis and Mark Goadrich. 2006. The relationship between precision-recall and ROC curves. In Proceed- ings of the 23rd ICML, pages 233-240, Pittsburgh, Pennsylvania.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding bursty topics from microblogs",
"authors": [
{
"first": "Qiming",
"middle": [],
"last": "Diao",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Feida",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "536--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiming Diao, Jing Jiang, Feida Zhu, and Ee-Peng Lim. 2012. Finding bursty topics from microblogs. In Pro- ceedings of the Annual Meeting of the ACL, pages 536-544, Jeju Island, South Korea.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Question answering over freebase with multi-column convolutional neural networks",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the ACL and the 7th International Joint Conference on NLP",
"volume": "",
"issue": "",
"pages": "260--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over freebase with multi-column convolutional neural networks. In Proceedings of the Annual Meeting of the ACL and the 7th International Joint Conference on NLP, pages 260-269.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transition-based dependency parsing with stack long short-term memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the ACL and the 7th International Joint Conference on NLP",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transition-based dependency parsing with stack long short-term mem- ory. In Proceedings of the Annual Meeting of the ACL and the 7th International Joint Conference on NLP, pages 334-343.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical methods for rates and proportions",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Myunghee Cho",
"middle": [],
"last": "Paik",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L Fleiss, Bruce Levin, and Myunghee Cho Paik. 2013. Statistical methods for rates and proportions. Wiley-Interscience, 3 edition.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent nets that time and count",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the IEEE-INNS-ENNS International Joint Conference on",
"volume": "",
"issue": "",
"pages": "189--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Gers and J\u00fcrgen Schmidhuber. 2000. Recurrent nets that time and count. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, pages 189-194.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to forget: Continual prediction with LSTM",
"authors": [
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural computation",
"volume": "12",
"issue": "10",
"pages": "2451--2471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451-2471.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "International conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In International conference on artificial in- telligence and statistics, pages 249-256.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hybrid speech recognition with deep bidirectional LSTM",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
}
],
"year": 2013,
"venue": "ASRU, 2013 IEEE Workshop on",
"volume": "",
"issue": "",
"pages": "273--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Graves, Navdeep Jaitly, and Abdel-rahman Mo- hamed. 2013a. Hybrid speech recognition with deep bidirectional LSTM. In ASRU, 2013 IEEE Workshop on, pages 273-278.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Abdel-Rahman",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "ICASSP, 2013 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013b. Speech recognition with deep recur- rent neural networks. In ICASSP, 2013 IEEE Interna- tional Conference on, pages 6645-6649.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linking tweets to news: A framework to enrich short text data in social media",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "239--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Guo, Hao Li, Heng Ji, and Mona T Diab. 2013. Linking tweets to news: A framework to enrich short text data in social media. In Proceedings of the Annual Meeting of the ACL, pages 239-249.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Simple customization of recursive neural networks for semantic relation classification",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Chikayama",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1372--1376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsu- ruoka, and Takashi Chikayama. 2013. Simple cus- tomization of recursive neural networks for semantic relation classification. In EMNLP, pages 1372-1376.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A neural network for factoid question answering over paragraphs",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Claudino",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "633--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum\u00e9 III. 2014. A neu- ral network for factoid question answering over para- graphs. In Proceedings of the Conference on EMNLP, pages 633-644.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An empirical exploration of recurrent network architectures",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd ICML.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for mod- elling sentences. In Proceedings of the 52nd Annual Meeting of the ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of the Conference on EMNLP, pages 1746-1751, Doha, Qatar.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary G",
"middle": [],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "biometrics",
"volume": "",
"issue": "",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Richard Landis and Gary G Koch. 1977. The mea- surement of observer agreement for categorical data. biometrics, pages 159-174.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Early stage influenza detection from Twitter",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.7340"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Claire Cardie. 2013. Early stage influenza detection from Twitter. arXiv preprint arXiv:1309.7340.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Major life event extraction from twitter based on congratulations/condolences speech acts",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Alan Ritter, Claire Cardie, and Eduard Hovy. 2014. Major life event extraction from twitter based on congratulations/condolences speech acts. In Pro- ceedings of the Conference on EMNLP, pages 1997- 2007, Doha, Qatar.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Social event extraction: Task, challenges and techniques",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE/ACM International Conference on ASONAM",
"volume": "",
"issue": "",
"pages": "526--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Li, Heng Ji, and Lin Zhao. 2015. Social event extraction: Task, challenges and techniques. In Pro- ceedings of the IEEE/ACM International Conference on ASONAM, pages 526-532.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A dependency-based neural network for relation classification",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the ACL and the 7th International Joint Conference on NLP",
"volume": "",
"issue": "",
"pages": "285--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proceedings of the Annual Meeting of the ACL and the 7th Interna- tional Joint Conference on NLP, pages 285-290, Bei- jing, China.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Simple, robust, scalable semi-supervised learning via expectation regularization",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th ICML",
"volume": "",
"issue": "",
"pages": "593--600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S Mann and Andrew McCallum. 2007. Simple, robust, scalable semi-supervised learning via expecta- tion regularization. In Proceedings of the 24th ICML, pages 593-600.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings of Workshop at ICLR.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Safety information mining -what can NLP do in a disaster",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Murakami",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Joint Conference on NLP",
"volume": "",
"issue": "",
"pages": "965--973",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Masato Hagiwara, and Koji Murakami. 2011. Safety information mining -what can NLP do in a disaster -. In Proceedings of the 5th Interna- tional Joint Conference on NLP, pages 965-973.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversa- tional text with word clusters. In Proceedings of the Conference of NAACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning state space trajectories in recurrent neural networks",
"authors": [
{
"first": "Barak",
"middle": [
"A"
],
"last": "Pearlmutter",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "2",
"pages": "263--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barak A Pearlmutter. 1989. Learning state space trajec- tories in recurrent neural networks. Neural Computa- tion, 1(2):263-269.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Streaming first story detection with application to twitter",
"authors": [
{
"first": "Sa\u0161a",
"middle": [],
"last": "Petrovi\u0107",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The Annual Conference of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sa\u0161a Petrovi\u0107, Miles Osborne, and Victor Lavrenko. 2010. Streaming first story detection with application to twitter. In Human Language Technologies: The An- nual Conference of NAACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "ZORE: A syntax-based system for Chinese open relation extraction",
"authors": [
{
"first": "Likun",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1870--1880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Likun Qiu and Yue Zhang. 2014. ZORE: A syntax-based system for Chinese open relation extraction. In Pro- ceedings of the Conference on EMNLP, pages 1870- 1880, Doha, Qatar.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Named entity recognition in tweets: An experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1524--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An exper- imental study. In Proceedings of the Conference on EMNLP, pages 1524-1534.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Open domain event extraction from twitter",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACM SIGKDD",
"volume": "",
"issue": "",
"pages": "1104--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Mausam, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from twitter. In Proceedings of ACM SIGKDD, pages 1104-1112.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Weakly supervised extraction of computer security events from twitter",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Casey",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "896--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Evan Wright, William Casey, and Tom Mitchell. 2015. Weakly supervised extraction of com- puter security events from twitter. In Proceedings of the 24th International Conference on World Wide Web, pages 896-905.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald J",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 1988,
"venue": "Cognitive modeling",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back- propagating errors. Cognitive modeling, 5:3.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling",
"authors": [
{
"first": "Hasim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Fran\u00e7oise",
"middle": [],
"last": "Beaufays",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Conference of INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasim Sak, Andrew Senior, and Fran\u00e7oise Beaufays. 2014. Long short-term memory recurrent neural net- work architectures for large scale acoustic modeling. In Proceedings of the Annual Conference of INTER- SPEECH.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Learning acoustic frame labeling for speech recognition with recurrent neural networks",
"authors": [
{
"first": "Hasim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Kanishka",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Fran\u00e7oise",
"middle": [],
"last": "Beaufays",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Schalkwyk",
"suffix": ""
}
],
"year": 2015,
"venue": "ICASSP, 2015 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "4280--4284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasim Sak, Andrew Senior, Kanishka Rao, Ozan Irsoy, Alex Graves, Fran\u00e7oise Beaufays, and Johan Schalk- wyk. 2015. Learning acoustic frame labeling for speech recognition with recurrent neural networks. In ICASSP, 2015 IEEE International Conference on, pages 4280-4284.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Earthquake shakes twitter users: real-time event detection by social sensors",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Sakaki",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the international conference on WWW",
"volume": "",
"issue": "",
"pages": "851--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of the international conference on WWW, pages 851-860. ACM.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013a. Reasoning with neural tensor networks for knowledge base completion. In NIPS, pages 926-934.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the conference on EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on EMNLP.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Deep learning for event-driven stock prediction",
"authors": [
{
"first": "Duy-Tin",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of IJ-CAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duy-Tin Vo and Yue Zhang. 2015. Deep learning for event-driven stock prediction. In Proceedings of IJ- CAI, BueNos Aires, Argentina, August.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "A neural probabilistic structured-prediction model for transition-based dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "1213--1222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the Annual Meeting of the ACL, pages 1213-1222.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Visualization of the numbers of tweets mentioning Ulster bank (on the left) and Essex (on the right) around the news publication dates.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Architecture of the proposed neural tweet representation model.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "LSTM-based text embedding for word vectors x 1 , x 2 , . . . , x n . semantic representation, each LSTM model can include multiple stacked layers. Neural pooling (Section 5.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Development PR curves.",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "(a) Tweet test set.(b) Dark web test set.",
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"num": null,
"text": "Final PR curves.",
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"num": null,
"text": "(a) Tweet test set. (b) Dark web test set.",
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"num": null,
"text": "Probability distributions on the test sets.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"text": "Features of a tweet by",
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"text": "News Title DDoS Attacks Take Down RBS, Ulster Bank, and Natwest Online Systems Date 2015 August 02 Sentences But as can be seen from the attacks against RBS, NatWest, and Ulster Bank, and the warnings from GovCERT.ch and the FBI, these attacks are coming back into vogue again.",
"content": "<table><tr><td colspan=\"2\">News Title Bored Brazilian skiddie claims DDoS</td></tr><tr><td/><td>against Essex Police</td></tr><tr><td>Date</td><td>2015 September 04</td></tr><tr><td>Sentences</td><td>A teenager from Brazil has claimed respon-</td></tr><tr><td/><td>sibility for a distributed denial of service</td></tr><tr><td/><td>(DDoS) attack on Essex Police's website,</td></tr><tr><td/><td>following a similar attack on another force</td></tr><tr><td/><td>earlier this week.</td></tr><tr><td/><td>They added: \"Officers investigating the sus-</td></tr><tr><td/><td>pected denial of service attack on the Essex</td></tr><tr><td/><td>Police website ... are liaising with other law</td></tr><tr><td/><td>enforcement agencies to identify any inves-</td></tr><tr><td/><td>tigative leads\"</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Example news sentences where victim roles are in italic and ENTITY is in bold.",
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"text": "Statistics of the datasets.",
"content": "<table/>",
"type_str": "table"
},
"TABREF7": {
"html": null,
"num": null,
"text": "AUCs of different model architectures.",
"content": "<table/>",
"type_str": "table"
},
"TABREF9": {
"html": null,
"num": null,
"text": "AUCs of different pooling methods.",
"content": "<table/>",
"type_str": "table"
},
"TABREF11": {
"html": null,
"num": null,
"text": "Final AUCs.",
"content": "<table/>",
"type_str": "table"
}
}
}
}