|
{ |
|
"paper_id": "D12-1005", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:23:16.887945Z" |
|
}, |
|
"title": "Streaming Analysis of Discourse Participants", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Inferring attributes of discourse participants has been treated as a batch-processing task: data such as all tweets from a given author are gathered in bulk, processed, analyzed for a particular feature, then reported as a result of academic interest. Given the sources and scale of material used in these efforts, along with potential use cases of such analytic tools, discourse analysis should be reconsidered as a streaming challenge. We show that under certain common formulations, the batchprocessing analytic framework can be decomposed into a sequential series of updates, using as an example the task of gender classification. Once in a streaming framework, and motivated by large data sets generated by social media services, we present novel results in approximate counting, showing its applicability to space efficient streaming classification.", |
|
"pdf_parse": { |
|
"paper_id": "D12-1005", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Inferring attributes of discourse participants has been treated as a batch-processing task: data such as all tweets from a given author are gathered in bulk, processed, analyzed for a particular feature, then reported as a result of academic interest. Given the sources and scale of material used in these efforts, along with potential use cases of such analytic tools, discourse analysis should be reconsidered as a streaming challenge. We show that under certain common formulations, the batchprocessing analytic framework can be decomposed into a sequential series of updates, using as an example the task of gender classification. Once in a streaming framework, and motivated by large data sets generated by social media services, we present novel results in approximate counting, showing its applicability to space efficient streaming classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The rapid growth in social media has led to an equally rapid growth in the desire to mine it for useful information: the content of public discussions, such as found in tweets, or in posts to online forums, can support a variety of data mining tasks. Inferring the underlying properties of those that engage with these platforms, the discourse participants, has become an active topic of research: predicting individual attributes such as age, gender, and political preferences (Rao et al., 2010) , or relationships between communicants, such as organizational dominance (Diehl et al., 2007) . This research can benefit areas such as: (A) commercial applications, e.g., improved models for advertising placement, or detecting fraudulent or otherwise unhelpful product reviews (Jindal and Liu, 2008; Ott et al., 2011) ; and (B) in enhanced models of civic discourse, e.g., inexpensive, large-scale, passive polling of popular opinion (O'Connor et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 478, |
|
"end": 496, |
|
"text": "(Rao et al., 2010)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 591, |
|
"text": "(Diehl et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 776, |
|
"end": 798, |
|
"text": "(Jindal and Liu, 2008;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 816, |
|
"text": "Ott et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 956, |
|
"text": "(O'Connor et al., 2010)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Classification with streaming data has usually been taken in the computational linguistics community to mean individual decisions made on items that are presented over time. For example: assigning a label to each newly posted product review as to whether it contains positive or negative sentiment, or whether the latest tweet signals a novel topic that should be tagged for tracking (Petrovic et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 407, |
|
"text": "(Petrovic et al., 2010)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we consider a distinct form of stream-based classification: we wish to assign, then dynamically update, labels on discourse participants based on their associated streaming communications. For instance, rather than classifying individual reviews as to their sentiment polarity, we might wish to classify the underlying author as to whether they are genuine or paid-advertising, and then update that decision as they continue to post new reviews. As the scale of social media continues to grow, we desire that our model be aggressively space efficient, which precludes a naive solution of storing the full communication history for all users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we make two contributions: (1) we make explicit that a standard bag-of-words classification model for predicting latent author attributes can be simply decomposed into a series of streaming updates; then (2) show how the randomized algorithm, Reservoir Counting (Van Durme and Lall, 2011), can be extended to maintain approximate av-erages, allowing for significant space savings in our classification model. Our running example task is gender prediction, based on spoken communication and microblogs/Twitter feeds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Assume that each discourse participant (e.g., speaker, author) a has an associated stream of communications (e.g., tweets, utterances, emails, etc.) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 148, |
|
"text": "communications (e.g., tweets, utterances, emails, etc.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(c i ) = C. Then let C t = (c 1 , ..., c t ) represent the first t elements of C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Assume access to a pretrained classifier \u03a6: 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03a6(a) = 1 if w \u2022 f (C) \u2265 0,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "0 otherwise, which we initially take to be linear: author labels are determined by computing the sign of the dot product between a weight vector w, and feature vector f (C), each of dimension d. Note that f (C) is a feature vector over the entire set of communications from a given author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For example, \u03a6 might be trained to classify author gender:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Gender(a) = Male if w \u2022 f (C) \u2265 0, Female otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We now make explicit how under certain common restrictions on the feature space, the classification decision can be decomposed into a series of decision updates over the elements of C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Definef (c i ) to be the vector containing the local, count-based feature values of communication c j . 2 For convenience let us assume thatf (c i ) \u2208 N d . Where |v| 1 = i |v i | is the L1-norm of vector v, let z t be the normalizing constant at t:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "z t = t i=1 |f (c i )| 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Now define f j (C), the j-th entry of f (C), as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "f j (C) = n i=1f j (c i ) z n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Thus f (C) represents the global relative frequency of each local, count-based feature. This allows us to rearrange terms:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "w \u2022 f (C) = d j=1 w j f j (C) = 1 z n d j=1 w k ( n i=1f j (c i )) = 1 z n n i=1 ( d j=1 w kfj (c i ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let (s t , z t ) be the current state of the classifier:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(s t , z t ) . = ( t i=1 d k=1 w kfk (c j ), z t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "which pairs the observed rolling sum, s t with the feature stream length z t . The classifier decision after seeing everything up to and including communication c t is thus a simple average:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03a6 t (a) = 1 if st zt \u2265 0, 0 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally we reach the observation that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "s t = s t\u22121 + w \u2022f (c t ) z t = z t\u22121 + |f (c t )| 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "which means that from an engineering standpoint we can process a stream of communication one element at a time, without the need to preserve the history explicitly. That is: for each author, for each attribute being analyzed, an online system only need maintain a state pair (s t , z t ) by extracting and weighting features locally for each new communication. Beyond the computational savings of not needing to store communications nor explicit feature vectors in memory, there are potential privacy benefits as well: analytic systems need not have a lasting record of discourse, they can instead glean whatever signal is required locally in the stream, and then discard the actual communications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
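
{

"text": "A minimal sketch of this update loop in Python (our illustration, not code from the paper; names such as update_state are hypothetical), assuming local count-based features arrive as a dict per communication:\n\ndef update_state(s, z, w, f_local):\n    # One streaming update: fold communication c_t into the state (s, z).\n    # w: dict feature -> weight (pretrained); f_local: dict feature -> count in c_t.\n    for feat, count in f_local.items():\n        s += w.get(feat, 0.0) * count  # accumulates w . f-bar(c_t)\n        z += count                     # accumulates |f-bar(c_t)|_1\n    return s, z\n\ndef decide(s, z):\n    # Phi_t(a) = 1 iff s_t / z_t >= 0; for the linear model the division\n    # does not change the sign, but s/z is what the log-linear form consumes.\n    return 1 if s / z >= 0 else 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "2"

},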
|
{ |
|
"text": "Log-linear Rather than a strictly linear \u03a6, such as instantiated via perceptron or SVM with linear kernel, many prefer log-linear models as their classification framework:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03a6(a) = 1 if 1 1+exp(\u2212w\u2022f (C)) \u2265 0.5, 0 otherwise. ... ... c 1 c 2 c t 1 c t 50% 50% 90% 10% 5% 95% 85% 15%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "86% 14% Figure 1 : A streaming analytic model should update its decision with each new communication, becoming more stable in its prediction as evidence is acquired.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 16, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In either setting, the state of the classifier is sufficiently captured by the pair (s t , z t ), under the restrictions on f . 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As an example of a model decomposed into a stream, we revist the task of gender classification based on speech transcripts, as explored by Boulis and Ostendorf (2005) and later Garera and Yarowsky (2009) . In the original problem definition, one would collect all transcribed utterances from a given speaker in a corpus such as Fisher (Cieri et al., 2004) or Switchboard (Godfrey et al., 1992) , known as a side of the conversation. Then by collapsing these utterances into a single document, one could classify it as to whether it was generated by a male or female. Here we define the task as: starting from scratch, report the classifier probability of the speaker being male, as each utterance is presented.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 166, |
|
"text": "Boulis and Ostendorf (2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 203, |
|
"text": "Garera and Yarowsky (2009)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 355, |
|
"text": "(Cieri et al., 2004)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 371, |
|
"end": 393, |
|
"text": "(Godfrey et al., 1992)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Intuitively we would expect that as more utterances are observed, the better our classification accuracy. Researchers such as Burger et al. (2011) have considered this point, but by comparing the classification accuracy based on the volume of batch data available per author (in that case, tweets): the more prolific the author had been, the better able they were to correctly classify their gender. We confirm here this can be reframed: as a speaker (author) continues to emit a stream of communication, a dynamic model tends to improve its online prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 146, |
|
"text": "Burger et al. (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our collection based on Switchboard consisted of 520 unique speakers (240 female, 280 male), with a total of roughly 400k utterances. Similar to Boulis and Ostendorf, we extracted unigram and bigram counts as features, but without further TFIDF reweighting. Ngrams were required to occur at least 10 times in the training set, recomputed for each split of 10-fold cross validation. Weights were computed under a log-linear model using LibLinear (Fan et al., 2008) , with 5% of training held out for tuning an L2 regularizing term. Feature extraction and dynamic aspects were handled through additions to the Jerboa package (Van Durme, 2012). Similar to previous work, we found intuitive features such as my husband to be weighted heavily (see Table 1 ), along with certain non-lexical vocalizations such as transcribed laughter. As seen in Figure 2 , accuracy indeed improves as more content is emitted. Figure 3 highlights the streaming perspective: individual speakers can be viewed as distinct trajectories through [0, 1], based on features of their utterances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 463, |
|
"text": "LibLinear (Fan et al., 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 743, |
|
"end": 750, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 848, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 904, |
|
"end": 912, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Validation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Now situated within a streaming context we exact space savings through approximation, extending the approach of Van Durme and Lall (2011), there concerned with online Locality Sensitive Hashing, here initially concerned with taking averages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "When calculating the average over a sequence of values, X n = (x 1 , ..., x n ), we divide the sum of the sequence, sum(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "X n ) = n i=1 x i , by its length, length(X n ) = |X n |: avg(X n ) = sum(Xn)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "length(Xn) Our goal in this section is to maintain a space efficient approximation of avg(X t ), as t increases, by using a bit-saving approximation of both the sum, and the length of the sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We begin by reviewing the method of Reservoir Counting, then extend it to a new notion we refer to as Reservoir Averaging. This will allow in the subsequent section to map our analytic model to a form ... ... Twitter deal with a very large number of individuals, each with a variety of implicit attributes (such as gender). This motivates a desire for online space efficiency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "explicitly amenable to keeping an online average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Randomized Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Reservoir Counting plays on the folklore algorithm of reservoir sampling, first described in the literature by Vitter (1985) . As applied to a stream of arbitrary elements, reservoir sampling maintains a list (reservoir) of length k, where the contents of the reservoir represents a uniform random sample over all elements 1...t observed thus far in the stream. When the stream is a sequence of positive and negative integers, reservoir counting implicitly views each value as being unrolled into a sequence made up of either 1 or -1. For instance, the sequence: (3, -2, 1) would be viewed as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 124, |
|
"text": "Vitter (1985)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Counting", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(1, 1, 1, -1, -1, 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Counting", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Since there are only two distinct values in this stream, the contents of the reservoir can be characterized by knowing the fixed value k, and then s: how many elements in the reservoir are 1. 4 This led to Van Durme and Lall defining a method, ReservoirUpdate, here abbreviated to ResUp, that allows for maintaining an approximate sum, defined as t( 2s k \u2212 1), through updating these two parameters t and s with each newly observed element. Expected accuracy of the approximation varies with the size of the sample, k. Reservoir Counting exploits the fact that the reservoir need only be considered implicitly, where s represented as a b-bit unsigned integer can be used to characterize a reservoir of size k = 2 b \u2212 1. This allowed those authors to show a 50% space reduction in the task of online Locality Sensitive Hashing, at similar levels of accuracy, by replacing explicit 32-bit counting variables with approximate counters of smaller size. See (Van Durme and Lall, 2011) for further details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 193, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Counting", |
|
"sec_num": "3.1" |
|
}, |
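
{

"text": "The implicit reservoir can be sketched per element as follows (our own illustration; the actual ResUp of Van Durme and Lall (2011) folds a run of same-signed elements into the reservoir in one batched step rather than one at a time):\n\nimport random\n\ndef reservoir_update_one(t, k, s, x):\n    # Fold one element x in {+1, -1} into an implicit size-k reservoir.\n    # t: elements seen before this one; s: reservoir slots holding +1.\n    t += 1\n    if t <= k:\n        return s + (1 if x == 1 else 0)  # reservoir not yet full\n    if random.random() < k / t:\n        # Element is sampled in and evicts a uniformly random slot.\n        evicted_is_one = random.random() < s / k\n        if x == 1 and not evicted_is_one:\n            s += 1\n        elif x == -1 and evicted_is_one:\n            s -= 1\n    return s\n\ndef approx_sum(t, k, s):\n    # The approximate sum maintained by Reservoir Counting.\n    return t * (2 * s / k - 1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reservoir Counting",

"sec_num": "3.1"

},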
|
{ |
|
"text": "For a given integer x, let m = |x| be the magnitude of x, and \u03c3 = sign(x). For a given sequence, let m * be the largest such value of m.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Modifying the earlier implicit construction, consider the sequence (3, -2, 1), with m * = 3, mapped to the sequence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "( 1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, -1, -1) where each value x is replaced with m * + m elements of \u03c3, and m * \u2212m elements of \u2212\u03c3. This views x as a sequence of length 2m * , made up of 1s and -1s, where each x in the discrete range [\u2212m * , m * ] has a unique number of 1s.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 62, |
|
"text": "1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, -1, -1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Now recognize that the average over the original sequence, here 3\u22122+1 3 = 2 3 , is proportional to the average over the implicit sequence,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1+1+...\u22121\u22121 18 = 4 18 = 2 3 ( 1 m * ). Generally for a sequence (x 1 , ..., x n ), with m * as defined, the average times 1 m * is equal to: n i=1 x i n ( 1 m * ) = 1 n2m * n i=1 ( m * +mi l=1 \u03c3 i + m * \u2212mi l=1 \u2212\u03c3 i ) = n i=1 m i \u03c3 nm *", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
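
{

"text": "The identity can be checked exactly on the worked example; a small verification sketch (ours):\n\nfrom fractions import Fraction\n\nseq, m_star = [3, -2, 1], 3\nimplicit = []\nfor x in seq:\n    m, sigma = abs(x), (1 if x >= 0 else -1)\n    # each x unrolls to m* + m copies of sigma and m* - m copies of -sigma\n    implicit += [sigma] * (m_star + m) + [-sigma] * (m_star - m)\n\nassert len(implicit) == len(seq) * 2 * m_star  # 18 elements\nassert Fraction(sum(implicit), len(implicit)) * m_star == Fraction(sum(seq), len(seq))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reservoir Averaging",

"sec_num": "3.2"

},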
|
{ |
|
"text": "where n2m * is the total number of 1s and -1s observed in the implicit stream, up to and including the mapping of element x n . If applying Reservoir Counting, s would then record the sampled number of 1s, as per norm, where t maintained as the implicit stream length can also be viewed as storing t = n2m * . At any point in the stream, the average over the original value sequence can then be approximated as: (1) the approximate sum of the implicit stream; divided by (2) the implicit stream length; times (3) m * to cancel the 1 m * term:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(t( 2s k \u2212 1)) 1 ( 1 t ) 2 (m * ) 3 = ( 2s k \u2212 1)m *", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Granularity As defined this scheme operates on streams of integers. We extend the definition to work with a stream of fixed precision floating point variables. Let g be a positive integer that we refer to as the granularity. Modify the mapping of value x from a sequence of length 2m * , to a sequence of length g, comprised of m * +m 2m * g instances of \u03c3, and (1\u2212 m * \u2212m 2m * )g instances of -\u03c3. As seen in line 4 of Algorithm 1, a random coin flip determines placement of the remainder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For example, the value 1.3, with m * = 3, and g = 10, would now be represented as a sequence of 3+1.3 6 g = 7.16 \u2208 (7, 8) instances of 1, followed by however many instances of -1 that lead to a sequence of length g, after probabilistic rounding. The possible sequences are thus: 1, 1, 1, 1, 1, 1, 1, -1, -1) with the former more likely.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 101, |
|
"text": "3+1.3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 307, |
|
"text": "1, 1, 1, 1, 1, 1, 1, -1, -1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(1, 1, 1, 1, 1, 1, 1, -1, -1, -1) (1,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "At this point we have described the framework captured by Algorithm 1, where Van Durme and Lall 2011 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "v := m+m * 2m * g 4: v := v with probability v \u2212 v , v otherwise 5: s := ResUp(ng, k, v, \u03c3, s) 6: s := ResUp((ng + v), k, g \u2212 v, \u2212\u03c3, s ) 7: Return s", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
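
{

"text": "A runnable rendering of Algorithm 1 under the granularity scheme, reusing reservoir_update_one from the earlier sketch (again our own reconstruction with hypothetical names, not the paper's exact code):\n\nimport math\nimport random\n\ndef update_average(n, k, g, m_star, x, s):\n    # Fold value x (|x| <= m_star) into the implicit averaging reservoir.\n    # n: values seen so far; the implicit stream length becomes (n + 1) * g.\n    m, sigma = abs(x), (1 if x >= 0 else -1)\n    v = (m + m_star) / (2 * m_star) * g\n    # probabilistic rounding of the remainder (line 4 of Algorithm 1)\n    v = math.ceil(v) if random.random() < v - math.floor(v) else math.floor(v)\n    for i in range(v):\n        s = reservoir_update_one(n * g + i, k, s, sigma)\n    for i in range(g - v):\n        s = reservoir_update_one(n * g + v + i, k, s, -sigma)\n    return s\n\ndef approx_average(s, k, m_star):\n    # Recover the approximate average as (2s/k - 1) * m_star.\n    return (2 * s / k - 1) * m_star",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reservoir Averaging",

"sec_num": "3.2"

},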
|
{ |
|
"text": "Log-scale Counting For additional space savings we might approximate the length parameter t with a small bit representation, using the approximate counting scheme of Morris (1978) . The method enables counting in log-scale by probabilistically incrementing a counter, where it becomes less and less likely to update the counter after each increment. This scheme is popularly known and used in a variety of contexts, recently in the community by Talbot (2009) Figure 5: Results on averaging randomly generated sequences, with m * = 100, g = 100, and using an 8 bit Morris-style counter of base 2. Larger reservoir sizes lead to better approximation, at higher cost in bits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 179, |
|
"text": "Morris (1978)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 458, |
|
"text": "Talbot (2009)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
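
{

"text": "A minimal Morris-style counter (our sketch; a base below 2 trades a larger register value for lower variance, as in the base-1.3 counter used later):\n\nimport random\n\nclass MorrisCounter:\n    # Stores roughly log_base(count) in a small register c.\n    def __init__(self, base=2.0):\n        self.base, self.c = base, 0\n\n    def increment(self):\n        # Update c with probability base**(-c): updates grow ever rarer.\n        if random.random() < self.base ** -self.c:\n            self.c += 1\n\n    def estimate(self):\n        # Unbiased estimate of the true count: (base^c - 1) / (base - 1).\n        return (self.base ** self.c - 1) / (self.base - 1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reservoir Averaging",

"sec_num": "3.2"

},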
|
{ |
|
"text": "to provide a streaming extension to the Bloom-filter based count-storage mechanism of Talbot and Osborne (2007a) and Talbot and Osborne (2007b) . See (Flajolet, 1985) for a detailed analysis of Morrisstyle counting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 143, |
|
"text": "Talbot and Osborne (2007b)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 166, |
|
"text": "(Flajolet, 1985)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reservoir Averaging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We show through experimentation on synthetic data that this approach gives reasonable levels of accuracy at space efficient sizes of the length and sum parameter. Random sequences of 1,000 values were generated by: (1) fix a value for m * ; (2) draw a polarity bias term \u00b5 uniformly from the range [0,1]; then (3) for each value, x: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Before we close this section, one might ask why this extension is needed in the first place. As Reservoir Counting already allows for keeping an online sum, and pairs it with a length parameter, then this would presumably be what is needed to get the average we are focussed on. Unfortunately that is not the case: the parameter recording the current stream length, here called t, tracks the length of the implicit stream of 1s and -1s, it does not track the length of the original stream of values that gave rise to the mapped version. As an example, consider again the sequence: (3, -2, 1), as compared to: (2,1,-1,-1,1) . Both have the same sum, and would therefore be viewed the same under the pre-existing Reservoir Counting algorithm, giving rise to implicit streams of the same length. But critically the sequences have different averages: 2 3 = 2 5 , which we cannot detect based on the original counting algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 609, |
|
"end": 622, |
|
"text": "(2,1,-1,-1,1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Justification", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Finally, we restate the constraint: for the sequence to averaged, one must know m * ahead of time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Going back to our streaming analysis model, we have a situation that can be viewed as a sequence of values, such that we do know m * . First reinterpret the fraction st zt equivalently as the normalized sum of a stream of elements sampled from w:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application to Classification", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "s t z t = 1 z t t i=1 d j=1f j (c i ) l=1 w j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application to Classification", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The value m * is then: m * = max j |w j |, over a sequence of length z t . Rather than updating s t and z t through basic addition, we can now use a smaller bit-wise representation for each variable, and update via Reservoir Averaging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application to Classification", |
|
"sec_num": "4" |
|
}, |
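
{

"text": "Composing the earlier sketches, per-author state maintenance under this scheme might look as follows (hypothetical glue code; w is the trained weight dict and update_average / approx_average are from the sketches above):\n\nm_star = max(abs(wj) for wj in w.values())  # largest possible update\n\ndef stream_communication(n, s, w, f_local, k=255, g=100):\n    # Feed weight w_j into the averaging reservoir once per local count of\n    # feature j; n counts unrolled feature observations (playing the z_t role).\n    for feat, count in f_local.items():\n        for _ in range(count):\n            s = update_average(n, k, g, m_star, w.get(feat, 0.0), s)\n            n += 1\n    return n, s\n\n# decision at any time: predict 1 iff approx_average(s, k, m_star) >= 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Application to Classification",

"sec_num": "4"

},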
|
{ |
|
"text": "Reconsidering the earlier classification experiment, we found this approximation method led to terrible results: while our experiments on synthetic data worked well, those sequences were sampled somewhat uniformly over the range of possible values. As seen in Figure 6 , sequences arising from observed feature weights in a practical setting may not be so broadly distributed. In brief: the more the maximum possible update, m * , can be viewed as an outlier, then the more the resulting implicit encoding of g elements per observed weight becomes dominated by \"filler\". As few observed elements will in that case require the provided range, then the implicit representation will be a mostly balanced set of 1 and -1 values. These mostly balanced encodings make it difficult to maintain an adequate approximation of the true average, when reliant on a small, implicit uniform sample. Here we leave further analysis aside, focusing instead on a modified solution for the classification model under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 268, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Problems in Practice", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Practically we would like to restrict our range to the dense region of weight updates, while at the same time not throwing away or truncating larger weights that appear outside a reduced window. We do this by fitting a replacement to m * : m \u2264 m * , based on the classifier's training data, such that too-large elements will be accommodated into the stream by implicitly assuming that the portion of a value that falls outside the restricted window is \"spread out\" over the previously observed values. That is, we modify the contents of the implicit reservoir by rewriting history: pretending that earlier elements were larger than they were, but still within the reduced window. As long as we don't see too many values that are overly large, then there will be room to accommodate the overflow without any theoretical damage to the implicit stream: all count mass may still be ac-counted for. If a moderately high number of overly large elements are observed, then we expect in practice for this to have a negligible impact on downstream performance. If an exceptional number of elements are overly large, then the training data was not representative of the test set. The newly introduced parameter m is used in MODIFIEDUPDATEAVERAGE (MUA), which relies on REWRITEHISTORY. Note that MUA uses the same value of n when calling REWRITEHISTORY as it does in the subsequent line calling UPDATEAV-ERAGE: we modify the state of the reservoir without incrementing the stream length, taking the current overflow and pretending we saw it earlier, spread out across previous elements. This happens by first estimating the number of 1 values seen thus far in the stream: s k n, then adding in twice the overflow value, which represents removing o instances of \u2212\u03c3 from the stream, and then adding o instances of \u03c3. We probabilistically round the resultant fraction to achieve a modified version of s, which is returned. p := max(0.0, s k \u2212 2o n ) 10: Return pk with prob. pk \u2212 pk , pk otherwise 4.3 Experiment Figure 7 compares the results seen in Figure 2 to a version of the experiment when using approximation. Parameters were: g = 100; k = 255; and a Morris-style counter for stream length using 8 bits and a base of 1.3. The value m was fit independently for each split of 10-fold cross validation, by finding through simple line search that which minimized the number of prediction errors on the original training data (see Figure 8 ). This result shows our ability to replace 2 variables of 32 bits (sum and length) with 2 approximation variables of 8 bits (reservoir status s, and stream length n), leading to a 75% reduction in the cost of maintaining online classifier state, with no significant cost in accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1999, |
|
"end": 2007, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF8" |
|
}, |
|
{ |
|
"start": 2037, |
|
"end": 2045, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2419, |
|
"end": 2427, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rewriting History", |
|
"sec_num": "4.2" |
|
}, |
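
{

"text": "A sketch of the REWRITEHISTORY step as described above (our reconstruction; the fragment in the text corresponds to the sigma = -1 case, so we carry the sign explicitly):\n\nimport math\nimport random\n\ndef rewrite_history(n, k, s, o, sigma):\n    # Spread overflow o across the n values already seen: remove o implicit\n    # instances of -sigma and add o of sigma, shifting the estimated\n    # fraction of 1s, s/k, by sigma * 2o/n, without growing the stream.\n    p = max(0.0, min(1.0, s / k + sigma * 2 * o / n))\n    pk = p * k\n    # probabilistic rounding back to an integer reservoir count\n    return math.ceil(pk) if random.random() < pk - math.floor(pk) else math.floor(pk)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Rewriting History",

"sec_num": "4.2"

},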
|
{ |
|
"text": "5 Real World Stream: Twitter", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rewriting History", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Based on the tweet IDs from the data used by Burger et al. (2011) , we recovered 2,958,107 of their roughly 4 million original tweets. 5 These tweets were then matched against the gender labels established in that prior work. As reported by Burger et al., the dominant language in the collection is English (66.7% reported), followed by Portuguese (14.4%) then Spanish (6.0%), with a large variety of other languages with small numbers of examples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 65, |
|
"text": "Burger et al. (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "5 Standard practice in Twitter data exchanges is to share only the unique tweet identifications and then requery the content from Twitter, thus allowing, e.g., the individual authors the ability to delete previous posts and have that reflected in future data collects. While respectful of author privacy, it does pose a challenge for scientific reproducibility. Content was lowercased, then processed by regular expression to collapse the following into respective single symbols: emoticons; URLs; usernames (@mentions); and hashtags. Any content deemed to be a retweet (following the characters RT) was removed. Text was then tokenized according to a modified version of the Penn TreeBank tokenization standard 6 that was less English-centric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "A log-linear classifier was built using all those authors in the training set 7 with at least 10 tweets. Similar to the previous experiment, unigrams and bigrams features were used exclusively, with the parameter m fit on the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As seen in Figure 9 , results were as in Switchboard: accuracy improves as more data streams in per author, and our approximate model sacrifices perhaps a point of accuracy in return for a 75% reduction in memory requirements per author. Table 2 gives the top features per gender. We see similarities to Switchboard in terms such as my wife, along with terms suggesting a more youthful population. Foreign terms are recognized by their parenthetical translation and 1st-or 2nd-person + Male/Female gender marking. For example, the Portuguese obrigado can be taken to be literally saying: I'm obliged (thank you), and I'm male.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 19, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF10" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Streaming algorithms have been developed within the applied communities of networking, and (very large) databases, as well as being a popular topic in the theoretical computer science literature. A sum- Within computational linguistics interest in streaming approaches is a more recent development; we provide here examples of representative work, beyond those described in previous sections. Levenberg and Osborne (2009) gave a streaming variant of the earlier perfect hashing language model of Talbot and Brants (2008) , which operated in batch-mode. Using a similar decomposition to that here, Van Durme and Lall (2010) showed that Locality Sensitive Hash (LSH) signatures (Indyk and Motwani, 1998; Charikar, 2002) built using count-based feature vectors can be maintained online, as compared to their earlier uses in the community (Ravichandran et al., 2005; Bhagat and Ravichandran, 2008) . Finally, Goyal et al. (2009) applied the frequent items 8 algorithm of Manku and Motwani (2002) to language modeling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 520, |
|
"text": "Talbot and Brants (2008)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 701, |
|
"text": "(Indyk and Motwani, 1998;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 717, |
|
"text": "Charikar, 2002)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 835, |
|
"end": 862, |
|
"text": "(Ravichandran et al., 2005;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 863, |
|
"end": 893, |
|
"text": "Bhagat and Ravichandran, 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 924, |
|
"text": "Goyal et al. (2009)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 967, |
|
"end": 991, |
|
"text": "Manku and Motwani (2002)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For further background in predicting author attributes such as gender, see (Garera and Yarowsky, 2009) for an overview of previous work and (nonstreaming) methodology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 102, |
|
"text": "(Garera and Yarowsky, 2009)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have taken the predominately batch-oriented process of analyzing communication data and shown it to be fertile territory for research in large-scale streaming algorithms. Using the example task of automatic gender detection, on both spoken transcripts and microblogs, we showed that classification can be thought of as a continuously running process, becoming more robust as further communications become available. Once positioned within a streaming framework, we presented a novel approximation technique for compressing the streaming memory requirements of the classifier (per author) by 75%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "There are a number of avenues to explore based on this framework. For instance, while here we assumed a static, pre-built classifier which was then applied to streaming data, future work may consider the interplay with online learning, based on methods such as by Crammer et al. (2006) . In the appli-cations arena, one might take the savings provided here to run multiple models in parallel, either for more robust predictions (perhaps \"triangulating\" on language ID and/or domain over the stream), or predicting additional properties, such as age, nationality, political orientation, and so forth. Finally, we assumed here strictly count-based features; streaming log-counting methods, tailored Bloom-filters for binary feature storage, and other related topics are assuredly applicable, and should give rise to many interesting new results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 285, |
|
"text": "Crammer et al. (2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "While here we assume binary decision tasks, dynamic classification in a multiclass, or regression, setting is an interesting avenue of exploration, for which these definitions generalize.2 As seen later inTable 1, we have in mind features such as the frequency of the n-gram my wife.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that some non-linear kernels can be maintained online in a similar fashion. For instance, a polynomial kernel of degree p decomposes as: (f (Cn) \u2022 w) p = ( sn zn ) p .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As the number of -1 values is simply: k \u2212 s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Such as codified in http://www.cis.upenn.edu/ treebank/tokenizer.sed 7 The same training, development and test set partitions were used as byBurger et al. (2011), minus those tweets we were unable to retrieve (as previously discussed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See the survey byCormode and Hadjieleftheriou (2009) for approaches to the frequent items problem, otherwise known as finding heavy hitters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgments I thank the reviewers and my colleagues at Johns Hopkins University for helpful feedback, in particular Matt Post, Mark Dredze, Glen Coppersmith and David Yarowsky. Thanks to David Yarowsky and Theresa Wilson for their assistance in collecting data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Large Scale Acquisition of Paraphrases for Learning Surface Patterns", |
|
"authors": [ |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Bhagat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rahul Bhagat and Deepak Ravichandran. 2008. Large Scale Acquisition of Paraphrases for Learning Surface Patterns. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A quantitative analysis of lexical differences between genders in telephone conversations", |
|
"authors": [ |
|
{ |
|
"first": "Constantinos", |
|
"middle": [], |
|
"last": "Boulis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Constantinos Boulis and Mari Ostendorf. 2005. A quan- titative analysis of lexical differences between genders in telephone conversations. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Discriminating gender on twitter", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guido", |
|
"middle": [], |
|
"last": "Zarrella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proceedings of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Similarity estimation techniques from rounding algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Moses", |
|
"middle": [], |
|
"last": "Charikar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of STOC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moses Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of STOC.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The fisher corpus: a resource for the next generations of speech-to-text", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Cieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Cieri, David Miller, and Kevin Walker. 2004. The fisher corpus: a resource for the next gen- erations of speech-to-text. In Proceedings of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Finding the frequent items in streams of data", |
|
"authors": [ |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Cormode", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marios", |
|
"middle": [], |
|
"last": "Hadjieleftheriou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Communications of the ACM", |
|
"volume": "52", |
|
"issue": "10", |
|
"pages": "97--105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graham Cormode and Marios Hadjieleftheriou. 2009. Finding the frequent items in streams of data. Com- munications of the ACM, 52(10):97-105.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Shai Shalev-Shwartz, and Yoram Singer", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
},

{

"first": "Shai",

"middle": [],

"last": "Shalev-Shwartz",

"suffix": ""

},

{

"first": "Yoram",

"middle": [],

"last": "Singer",

"suffix": ""

}
|
], |
|
"year": 2006, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Relationship identification for social network discovery", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Diehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Galileo", |
|
"middle": [], |
|
"last": "Namata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lise", |
|
"middle": [], |
|
"last": "Getoor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher P. Diehl, Galileo Namata, and Lise Getoor. 2007. Relationship identification for social network discovery. In Proceedings of AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Liblinear: A library for large linear classification", |
|
"authors": [ |
|
{

"first": "Rong-En",

"middle": [],

"last": "Fan",

"suffix": ""

},

{

"first": "Kai-Wei",

"middle": [],

"last": "Chang",

"suffix": ""

},

{

"first": "Cho-Jui",

"middle": [],

"last": "Hsieh",

"suffix": ""

},

{

"first": "Xiang-Rui",

"middle": [],

"last": "Wang",

"suffix": ""

},

{

"first": "Chih-Jen",

"middle": [],

"last": "Lin",

"suffix": ""

}
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "", |
|
"issue": "9", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsief, Xiang- Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Ma- chine Learning Research, (9).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Approximate counting: a detailed analysis", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Flajolet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "BIT", |
|
"volume": "25", |
|
"issue": "1", |
|
"pages": "113--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe Flajolet. 1985. Approximate counting: a de- tailed analysis. BIT, 25(1):113-134.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Modeling latent biographic attributes in conversational genres", |
|
"authors": [ |
|
{ |
|
"first": "Nikesh", |
|
"middle": [], |
|
"last": "Garera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikesh Garera and David Yarowsky. 2009. Modeling la- tent biographic attributes in conversational genres. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Switchboard: Telephone speech corpus for research and development", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Holliman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jane", |
|
"middle": [], |
|
"last": "Mc-Daniel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John J. Godfrey, Edward C. Holliman, and Jane Mc- Daniel. 1992. Switchboard: Telephone speech cor- pus for research and development. In Proceedings of ICASSP.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Streaming for large scale NLP: Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{

"first": "Hal",

"middle": [],

"last": "Daum\u00e9",

"suffix": "III"

},
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Venkatasubramanian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Goyal, Hal Daum\u00e9 III, and Suresh Venkatasubra- manian. 2009. Streaming for large scale NLP: Lan- guage Modeling. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Approximate nearest neighbors: towards removing the curse of dimensionality", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Indyk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajeev", |
|
"middle": [], |
|
"last": "Motwani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of STOC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of di- mensionality. In Proceedings of STOC.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Opinion spam and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the International Conference on Web Search and Wed Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "219--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Jindal and Bing Liu. 2008. Opinion spam and anal- ysis. In Proceedings of the International Conference on Web Search and Wed Data Mining, pages 219-230.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Streambased randomised language models for smt", |
|
"authors": [ |
|
{ |
|
"first": "Abby", |
|
"middle": [], |
|
"last": "Levenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abby Levenberg and Miles Osborne. 2009. Stream- based randomised language models for smt. In Pro- ceedings of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Approximate frequency counts over data streams", |
|
"authors": [ |
|
{ |
|
"first": "Gurmeet", |
|
"middle": [], |
|
"last": "Singh Manku", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajeev", |
|
"middle": [], |
|
"last": "Motwani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 28th international conference on Very Large Data Bases (VLDB)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gurmeet Singh Manku and Rajeev Motwani. 2002. Ap- proximate frequency counts over data streams. In Pro- ceedings of the 28th international conference on Very Large Data Bases (VLDB).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Counting large numbers of events in small registers", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "Communications of the ACM", |
|
"volume": "21", |
|
"issue": "10", |
|
"pages": "840--842", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Morris. 1978. Counting large numbers of events in small registers. Communications of the ACM, 21(10):840-842.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Data streams: Algorithms and applications", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Muthu Muthukrishnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Foundations and Trends in Theoretical Computer Science", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Muthu Muthukrishnan. 2005. Data streams: Algo- rithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "From tweets to polls: Linking text sentiment to public opinion time series", |
|
"authors": [ |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "O'Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramnath", |
|
"middle": [], |
|
"last": "Balasubramanyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Routledge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ICWSM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proceedings of ICWSM.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Finding deceptive opinion spam by any stretch of the imagination", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Hancock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey Han- cock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Streaming first story detection with application to twitter", |
|
"authors": [ |
|
{ |
|
"first": "Sasa", |
|
"middle": [], |
|
"last": "Petrovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Lavrenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2010. Streaming first story detection with application to twitter. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Classifying latent user attributes in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Shreevats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaswi", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2nd International Workshop on Search and Mining Usergenerated Contents (SMUC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user at- tributes in twitter. In Proceedings of the 2nd In- ternational Workshop on Search and Mining User- generated Contents (SMUC).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Randomized Algorithms and NLP: Using Locality Sensitive Hash Functions for High Speed Noun Clustering", |
|
"authors": [ |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized Algorithms and NLP: Using Lo- cality Sensitive Hash Functions for High Speed Noun Clustering. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Randomized language models via perfect hash functions", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Talbot and Thorsten Brants. 2008. Randomized language models via perfect hash functions. In Pro- ceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Randomised language modelling for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Talbot and Miles Osborne. 2007a. Randomised language modelling for statistical machine translation. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Smoothed Bloom filter language models: Tera-Scale LMs on the Cheap", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Talbot and Miles Osborne. 2007b. Smoothed Bloom filter language models: Tera-Scale LMs on the Cheap. In Proceedings of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Succinct approximate counting of skewed data", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Talbot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Talbot. 2009. Succinct approximate counting of skewed data. In Proceedings of IJCAI.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Probabilistic Counting with Randomized Storage", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Lall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Van Durme and Ashwin Lall. 2009. Proba- bilistic Counting with Randomized Storage. In Pro- ceedings of IJCAI.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Online Generation of Locality Sensitive Hash Signatures", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Lall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Van Durme and Ashwin Lall. 2010. Online Generation of Locality Sensitive Hash Signatures. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Efficient Online Locality Sensitive Hashing via Reservoir Counting", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Lall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Van Durme and Ashwin Lall. 2011. Effi- cient Online Locality Sensitive Hashing via Reservoir Counting. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Jerboa: A toolkit for randomized and streaming algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Human Language Technology Center of Excellence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Van Durme. 2012. Jerboa: A toolkit for randomized and streaming algorithms. Technical Re- port 7, Human Language Technology Center of Excel- lence, Johns Hopkins University.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Random sampling with a reservoir", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Vitter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "ACM Trans. Math. Softw", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "37--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey S. Vitter. 1985. Random sampling with a reser- voir. ACM Trans. Math. Softw., 11:37-57, March.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Accuracy on Switchboard gender classification, reported at every fifth utterance, using a dynamic log-linear model with 10-fold cross validation.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Streaming analysis of eight randomly sampled speakers, four per gender (red-solid: female, bluedashed: male). Being a log-linear model, the decision boundary is marked at y = 0.5.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Social media platforms such as Facebook or", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "defined ResUp. Algorithm 1 UPDATEAVERAGE(n, k, m, m * , \u03c3, g, s) Parameters: n : size of stream k : size of reservoir, also maximum value of s m : magnitude of update m * : maximum magnitude of all updates \u03c3 : sign of update g : granularity s : current value of reservoir 1: if m = 0 or \u03c3 = 0 then 2:Return without doing anything 3:", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
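
The UPDATEAVERAGE caption above rests on reservoir counting (Van Durme and Lall, 2011): an online average of signed updates is maintained by tracking only s, the number of positive entries in an implicit size-k reservoir. The Python sketch below is our simplification of that core idea for a stream of unit-magnitude signs; it omits Algorithm 1's magnitude m and granularity g machinery, and the function names are ours, not the paper's.

```python
import random

def reservoir_sign_update(s: int, k: int, n: int, sigma: int) -> int:
    """Fold the n-th sign (sigma in {+1, -1}) of a stream into a size-k
    reservoir that is represented only by s, its count of +1 entries."""
    if n <= k:                       # reservoir still filling: keep every sign
        return s + (1 if sigma > 0 else 0)
    if random.random() < k / n:      # new sign displaces a uniformly random slot
        if random.random() < s / k:  # the displaced slot held a +1
            s -= 1
        if sigma > 0:
            s += 1
    return s

def estimate_sum(s: int, k: int, n: int) -> float:
    """Estimate the running sum of the +/-1 stream after n updates."""
    return n * (2.0 * s / k - 1.0)
```

Because s never exceeds k, the running sum of an arbitrarily long stream is summarized in roughly log2(k) bits, which is what makes the approach attractive for space-efficient per-author streaming classification.
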
|
"FIGREF4": { |
|
"text": "and VanDurme and Lall (2009)", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "(a) \u03c3 was positive with probability \u00b5; (b) m was drawn from [0, m * ]. Figure 5 shows results for varying reservoir sizes (using 4, 8 or 12 bits) when g = 100, m * = 100, and the length parameter was represented with an 8 bit Morris-style counter of base 2.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
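
The Morris-style counter mentioned in this caption replaces an exact stream-length count with a small register holding an approximate logarithm of the count (Morris, 1978). Below is a minimal sketch, assuming a base-b counter clamped to the stated 8 bits; the function names are illustrative rather than taken from the paper.

```python
import random

def morris_update(c: int, base: float = 2.0, bits: int = 8) -> int:
    """Increment the register with probability base**(-c), so that c
    tracks roughly log_base of the true event count."""
    if c < 2 ** bits - 1 and random.random() < base ** (-c):
        c += 1
    return c

def morris_estimate(c: int, base: float = 2.0) -> float:
    """Unbiased estimate of the number of events seen so far."""
    return (base ** c - 1.0) / (base - 1.0)
```

With base 2 and 8 bits the register can represent counts up to roughly 2^255, at the price of multiplicative rather than additive error.
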
|
"FIGREF6": { |
|
"text": "Frequency of individual feature weights observed over a full set of communications by a single example speaker. Most observed features have relatively small magnitude weight. The mean value is 1.3, with 1 1+e \u22121.3 = 0.79 > 0.5, which properly classifies the speaker as MALE.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
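
The arithmetic in this caption is easy to verify: passing the mean feature weight of 1.3 through the logistic link yields a value above the 0.5 decision boundary, hence the MALE label. For example:

```python
import math

w_mean = 1.3                         # mean feature weight from the caption
p = 1.0 / (1.0 + math.exp(-w_mean))  # logistic link
print(f"{p:.2f}")                    # 0.79 > 0.5, i.e. classified MALE
```
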
|
"FIGREF7": { |
|
"text": "MUA(n, k, m, m , \u03c3, g, s) 1: if m < m then2: Return UPDATEAVERAGE(n, k, m, m , \u03c3, g, s) 3: s := REWRITEHISTORY(n, k, m, m , \u03c3, g, s) 4: Return UPDATEAVERAGE(n, k, m , m , \u03c3, g, s ) Algorithm 3 REWRITEHISTORY(n, k, m, m , \u03c3, g, s) Parameters: o : overflow to be accommodated 1: o := m\u2212m 2m g 2: if \u03c3 > 0 then", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF8": { |
|
"text": "Comparison between using explicit counting and approximation on the Switchboard dataset, with bands reflecting 95% confidence.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF9": { |
|
"text": "Summed 0/1 loss over all utterances by each speaker in the Switchboard training set, across 10 splits. A value of m = 5 was on average that which minimized the number of mistakes made.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF10": { |
|
"text": "Comparison between using explicit counting and approximation, on the Twitter dataset, with bands reflecting 95% confidence.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Top ten features by gender.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Top thirty-five features by gender in Twitter. streaming algorithms community is beyond the scope of this work: interested readers are directed to Muthukrishnan (2005) as a starting point.", |
|
"content": "<table><tr><td>Male</td><td>obrigado (thank you [1M]), wife, my wife,</td></tr><tr><td/><td>bro, cansado (tired [1M]), gay, mate, dude,</td></tr><tr><td/><td>[@username] why, buddy, windows, album,</td></tr><tr><td/><td>dope, beer, [@username] yo, sir, ps3, comics,</td></tr><tr><td/><td>galera (folks/people), amigo (friend [2M]),</td></tr><tr><td/><td>man !, fuckin, omg omg, cheers, ai n't</td></tr><tr><td colspan=\"2\">Female obrigada (thank you [1F]), hubby, husband,</td></tr><tr><td/><td>cute, my husband, ?, cansada (tired [1F]),</td></tr><tr><td/><td>hair, dress, soooo, lovely, etsy, boyfriend,</td></tr><tr><td/><td>jonas, loved, book, sooo, girl, s\u00e9 (I),</td></tr><tr><td/><td>lindo (cute), shopping, amiga (friend [2F]),</td></tr><tr><td/><td>yummy, ppl, cupcakes</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |