{
"paper_id": "Q14-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:26.415024Z"
},
"title": "Dynamic Language Models for Streaming Text",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tepper School of Business Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tepper School of Business Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Bryan",
"middle": [
"R"
],
"last": "Routledge",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tepper School of Business Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tepper School of Business Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tepper School of Business Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a probabilistic language model that captures temporal dynamics and conditions on arbitrary non-linguistic context features. These context features serve as important indicators of language changes that are otherwise difficult to capture using text data by itself. We learn our model in an efficient online fashion that is scalable for large, streaming data. With five streaming datasets from two different genreseconomics news articles and social media-we evaluate our model on the task of sequential language modeling. Our model consistently outperforms competing models.",
"pdf_parse": {
"paper_id": "Q14-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a probabilistic language model that captures temporal dynamics and conditions on arbitrary non-linguistic context features. These context features serve as important indicators of language changes that are otherwise difficult to capture using text data by itself. We learn our model in an efficient online fashion that is scalable for large, streaming data. With five streaming datasets from two different genreseconomics news articles and social media-we evaluate our model on the task of sequential language modeling. Our model consistently outperforms competing models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models are a key component in many NLP applications, such as machine translation and exploratory corpus analysis. Language models are typically assumed to be static-the word-given-context distributions do not change over time. Examples include n-gram models (Jelinek, 1997) and probabilistic topic models like latent Dirichlet allocation (Blei et al., 2003) ; we use the term \"language model\" to refer broadly to probabilistic models of text.",
"cite_spans": [
{
"start": 267,
"end": 282,
"text": "(Jelinek, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 347,
"end": 366,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, streaming datasets (e.g., social media) have attracted much interest in NLP. Since such data evolve rapidly based on events in the real world, assuming a static language model becomes unrealistic. In general, more data is seen as better, but treating all past data equally runs the risk of distracting a model with irrelevant evidence. On the other hand, cautiously using only the most recent data risks overfitting to short-term trends and missing important timeinsensitive effects (Blei and Lafferty, 2006; Wang et al., 2008) . Therefore, in this paper, we take steps toward methods for capturing long-range temporal dynamics in language use.",
"cite_spans": [
{
"start": 493,
"end": 518,
"text": "(Blei and Lafferty, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 519,
"end": 537,
"text": "Wang et al., 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model also exploits observable context variables to capture temporal variation that is otherwise difficult to capture using only text. Specifically for the applications we consider, we use stock market data as exogenous evidence on which the language model depends. For example, when an important company's price moves suddenly, the language model should be based not on the very recent history, but should be similar to the language model for a day when a similar change happened, since people are likely to say similar things (either about that company, or about conditions relevant to the change). Non-linguistic contexts such as stock price changes provide useful auxiliary information that might indicate the similarity of language models across different timesteps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also turn to a fully online learning framework (Cesa-Bianchi and Lugosi, 2006) to deal with nonstationarity and dynamics in the data that necessitate adaptation of the model to data in real time. In online learning, streaming examples are processed only when they arrive. Online learning also eliminates the need to store large amounts of data in memory. Strictly speaking, online learning is distinct from stochastic learning, which for language models built on massive datasets has been explored by Hoffman et al. (2013) and Wang et al. (2011) . Those techniques are still for static modeling. Language modeling for streaming datasets in the context of machine translation was considered by Levenberg and Osborne (2009) and Levenberg et al. (2010) . Goyal et al. (2009) introduced a streaming algorithm for large scale language modeling by approximating ngram frequency counts. We propose a general online learning algorithm for language modeling that draws inspiration from regret minimization in sequential predictions (Cesa-Bianchi and Lugosi, 2006) and on-line variational algorithms (Sato, 2001; Honkela and Valpola, 2003) .",
"cite_spans": [
{
"start": 50,
"end": 81,
"text": "(Cesa-Bianchi and Lugosi, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 504,
"end": 525,
"text": "Hoffman et al. (2013)",
"ref_id": "BIBREF11"
},
{
"start": 530,
"end": 548,
"text": "Wang et al. (2011)",
"ref_id": "BIBREF26"
},
{
"start": 696,
"end": 724,
"text": "Levenberg and Osborne (2009)",
"ref_id": "BIBREF17"
},
{
"start": 729,
"end": 752,
"text": "Levenberg et al. (2010)",
"ref_id": "BIBREF18"
},
{
"start": 755,
"end": 774,
"text": "Goyal et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 1026,
"end": 1057,
"text": "(Cesa-Bianchi and Lugosi, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 1093,
"end": 1105,
"text": "(Sato, 2001;",
"ref_id": "BIBREF22"
},
{
"start": 1106,
"end": 1132,
"text": "Honkela and Valpola, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To our knowledge, our model is the first to bring together temporal dynamics, conditioning on nonlinguistic context, and scalable online learning suitable for streaming data and extensible to include topics and n-gram histories. The main idea of our model is independent of the choice of the base language model (e.g., unigrams, bigrams, topic models, etc.) . In this paper, we focus on unigram and bigram language models in order to evaluate the basic idea on well understood models, and to show how it can be extended to higher-order n-grams. We leave extensions to topic models for future work.",
"cite_spans": [
{
"start": 312,
"end": 357,
"text": "(e.g., unigrams, bigrams, topic models, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel task to evaluate our proposed language model. The task is to predict economicsrelated text at a given time, taking into account the changes in stock prices up to the corresponding day. This can be seen an inverse of the setup considered by Lavrenko et al. (2000) , where news is assumed to influence stock prices. We evaluate our model on economics news in various languages (English, German, and French), as well as Twitter data.",
"cite_spans": [
{
"start": 259,
"end": 281,
"text": "Lavrenko et al. (2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we first discuss the background for sequential predictions then describe how to formulate online language modeling as sequential predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Let w 1 , w 2 , . . . , w T be a sequence of response variables, revealed one at a time. The goal is to design a good learner to predict the next response, given previous responses and additional evidence which we denote by x t \u2208 R M (at time t). Throughout this paper, we use the term features for x. Specifically, at each round t, the learner receives x t and makes a prediction\u0175 t , by choosing a parameter vector \u03b1 t \u2208 R M . In this paper, we refer to \u03b1 as feature coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Predictions",
"sec_num": "2.1"
},
{
"text": "There has been an enormous amount of work on online learning for sequential predictions, much of it building on convex optimization. For a sequence of loss functions 1 , 2 , . . . , T (parameterized by \u03b1), an online learning algorithm is a strategy to minimize the regret, with respect to the best fixed \u03b1 * in hindsight. 1 Regret guarantees assume a Lipschitz con-1 Formally, the regret is defined as Regret T (\u03b1 * ) = dition on the loss function that can be prohibitive for complex models. See Cesa-Bianchi and Lugosi (2006) , Rakhlin (2009) , Bubeck (2011) , and Shalev-Shwartz (2012) for in-depth discussion and review.",
"cite_spans": [
{
"start": 496,
"end": 526,
"text": "Cesa-Bianchi and Lugosi (2006)",
"ref_id": "BIBREF4"
},
{
"start": 529,
"end": 543,
"text": "Rakhlin (2009)",
"ref_id": "BIBREF21"
},
{
"start": 546,
"end": 559,
"text": "Bubeck (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Predictions",
"sec_num": "2.1"
},
{
"text": "There has also been work on online and stochastic learning for Bayesian models (Sato, 2001; Honkela and Valpola, 2003; Hoffman et al., 2013) , based on variational inference. The goal is to approximate posterior distributions of latent variables when examples arrive one at a time.",
"cite_spans": [
{
"start": 79,
"end": 91,
"text": "(Sato, 2001;",
"ref_id": "BIBREF22"
},
{
"start": 92,
"end": 118,
"text": "Honkela and Valpola, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 119,
"end": 140,
"text": "Hoffman et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Predictions",
"sec_num": "2.1"
},
{
"text": "In this paper, we will use both kinds of techniques to learn language models for streaming datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Predictions",
"sec_num": "2.1"
},
{
"text": "Consider an online language modeling problem, in the spirit of sequential predictions. The task is to build a language model that accurately predicts the texts generated on day t, conditioned on observable features up to day t, x 1:t . Every day, after the model makes a prediction, the actual texts w t are revealed and we suffer a loss. The loss is defined as the negative log likelihood of the model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "t = \u2212 log p(w t | \u03b1, \u03b2 1:t\u22121 , x 1:t\u22121 , n 1:t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": ", where \u03b1 and \u03b2 1:T are the model parameters and n is a background distribution (details are given in \u00a73.2). We can then update the model and proceed to day t + 1. Notice the similarity to the sequential prediction described above. Importantly, this is a realistic setup for building evolving language models from large-scale streaming datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "We index timesteps by t \u2208 {1, . . . , T } and word types by v \u2208 {1, . . . , V }, both are always given as subscripts. We denote vectors in boldface and use 1 : T as a shorthand for {1, 2, . . . , T }. We assume words of the form {w t } T t=1 for w t \u2208 R V , which is the vector of word frequences at timetstep t. Nonlinguistic context features are {x t } T t=1 for x t \u2208 R M . The goal is to learn parameters \u03b1 and \u03b2 1:T , which will be described in detail next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "3.1"
},
{
"text": "The main idea of our model is illustrated by the following generative story for the unigram language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "model. (We will discuss the extension to higher-order language models later.) A graphical representation of our proposed model is given in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "1. Draw feature coefficients \u03b1 \u223c N(0, \u03bbI). 2 Here \u03b1 is a vector in R M , where M is the dimensionality of the feature vector. 2. For each timestep t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "(a) Observe non-linguistic context features x t . (b) Draw \u03b2 t \u223c N t\u22121 k=1 \u03b4 k exp(\u03b1 f (xt,x k )) P t\u22121 j=1 \u03b4 j exp(\u03b1 f (xt,x j )) \u03b2 k , \u03d5I .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "Here, \u03b2 t is a vector in R V , where V is the size of the word vocabulary, \u03d5 is the variance parameter and \u03b4 k is a fixed hyperparameter; we discuss them below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "(c) For each word w t,v , draw w t,v \u223c Categorical exp(n 1:t\u22121,v +\u03b2t,v) P j\u2208V exp(n 1:t\u22121,j +\u03b2 t,j ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "In the last step, \u03b2 t and n are mapped to the Vdimensional simplex, forming a distribution over words. n 1:t\u22121 \u2208 R V is a background (log) distribution, inspired by a similar idea in Eisenstein et al. (2011) . In this paper, we set n 1:t\u22121,v to be the logfrequency of v up to time t \u2212 1. We can interpret \u03b2 as a time-dependent deviation from the background log-frequencies that incorporates world-context. This deviation comes in the form of a weighted average of earlier deviation vectors.",
"cite_spans": [
{
"start": 183,
"end": 207,
"text": "Eisenstein et al. (2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "The intuition behind the model is that the probability of a word appearing at day t depends on the background log-frequencies, the deviation coefficients of the word at previous timesteps \u03b2 1:t\u22121 , and the similarity of current conditions of the world (based on observable features x) to previous timesteps through f (x t , x k ). That is, f is a function that takes ddimensional feature vectors at two timesteps x t and x k and returns a similarity vector f (x t , x k ) \u2208 R M (see \u00a76.1.1 for an example of f that we use in our experiments). The similarity is parameterized by \u03b1, and decays over time with rate \u03b4 k . In this work, we assume a fixed window size c (i.e., we consider c most recent timesteps), so that \u03b4 1:t\u2212c\u22121 = 0 and \u03b4 t\u2212c:t\u22121 = 1. This allows up to cth order dependencies. 3 Setting \u03b4 this way allows us to bound the 2 Feature coefficients \u03b1 can be also drawn from other distributions such as \u03b1 \u223c Laplace(0, \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "3 In online Bayesian learning, it is known that forgetting inaccurate estimates from earlier timesteps is important (Sato, x Figure 1 : Graphical representation of the model. The subscript indices q, r, s are shorthands for the previous timesteps t \u2212 3, t \u2212 2, t \u2212 1. Only four timesteps are shown here. There are arrows from previous",
"cite_spans": [
{
"start": 116,
"end": 122,
"text": "(Sato,",
"ref_id": null
}
],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "\u03b2 t\u22124 , \u03b2 t\u22125 , . . . , \u03b2 t\u2212c to \u03b2 t ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "where c is the window size as described in \u00a73.2. They are not shown here, for readability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "number of past vectors \u03b2 that need to be kept in memory. We set \u03b2 0 to 0. Although the generative story described above is for unigram language models, extensions can be made to more complex models (e.g., mixture of unigrams, topic models, etc.) and to longer n-gram contexts. In the case of topic models, the model will be related to dynamic topic models (Blei and Lafferty, 2006) augmented by context features, and the learning procedure in \u00a74 can be used to perform online learning of dynamic topic models. However, our model captures longer-range dependencies than dynamic topic models, and can condition on nonlinguistic features or metadata. In the case of higherorder n-grams, one simple way is to draw more \u03b2, one for each history. For example, for a bigram model, \u03b2 is in R V 2 , rather than R V in the unigram model. We consider both unigram and bigram language models in our experiments in \u00a76. However, the main idea presented in this paper is largely independent of the base model. Related work. Mimno and McCallum (2008) and Eisenstein et al. (2010) similarly conditioned text on observable features (e.g., author, publication venue, geography, and other document-level metadata), but conducted inference in a batch setting, thus their approaches are not suitable for streaming data. It is not immediately clear how to generalize their approach to dynamic settings. Algorithmically, our work comes closest to the online dynamic topic model of Iwata et al. (2010) , except that we also incorporate context features.",
"cite_spans": [
{
"start": 356,
"end": 381,
"text": "(Blei and Lafferty, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 1008,
"end": 1033,
"text": "Mimno and McCallum (2008)",
"ref_id": "BIBREF20"
},
{
"start": 1038,
"end": 1062,
"text": "Eisenstein et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 1456,
"end": 1475,
"text": "Iwata et al. (2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Story",
"sec_num": "3.2"
},
{
"text": "The goal of the learning procedure is to minimize the overall negative log likelihood,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "4"
},
{
"text": "\u2212 log L(D) = \u2212 log d\u03b2 1:T p(\u03b2 1:T | \u03b1, x 1:T )p(w 1:T | \u03b2 1:T , n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "4"
},
{
"text": "However, this quantity is intractable. Instead, we derive an upper bound for this quantity and minimize that upper bound. Using Jensen's inequality, the variational upper bound on the negative log likelihood is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "4"
},
{
"text": "\u2212 log L(D) \u2264 \u2212 d\u03b2 1:T q(\u03b2 1:T | \u03b3 1:T ) (4) log p(\u03b2 1:T | \u03b1, x 1:T )p(w 1:T | \u03b2 1:T , n) q(\u03b2 1:T | \u03b3 1:T ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "4"
},
{
"text": "Specifically, we use mean-field variational inference where the variables in the variational distribution q are completely independent. We use Gaussian distributions as our variational distributions for \u03b2, denoted by \u03b3 in the bound in Eq. 4. We denote the parameters of the Gaussian variational distribution for \u03b2 t,v (word v at timestep t) by \u00b5 t,v (mean) and \u03c3 t,v (variance). Figure 2 shows the functional form of the variational bound that we seek to minimize, denoted byB. The two main steps in the optimization of the bound are inferring \u03b2 t and updating feature coefficients \u03b1. We next describe each step in detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 387,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "4"
},
{
"text": "The goal of the learning procedure is to minimize the upper bound in Figure 2 with respect to \u03b1. However, since the data arrives in an online fashion, and speed is very important for processing streaming datasets, the model needs to be updated at every timestep t (in our experiments, daily).",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "Notice that at timestep t, we only have access to x 1:t and w 1:t , and we perform learning at every timestep after the text for the current timestep w t is revealed. We do not know x t+1:T and w t+1:T . Nonetheless, we want to update our model so that it can make a better prediction at t + 1. Therefore, we can only minimize the bound until timestep t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "Let C k exp(\u03b1 f (xt,x k )) P t\u22121 j=t\u2212c exp(\u03b1 f (xt,x j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": ". Our learning algorithm is a variational Expectation-Maximization algorithm (Wainwright and Jordan, 2008) . E-step Recall that we use variational inference and the variational parameters for \u03b2 are \u00b5 and \u03c3. As shown in Figure 2 , since the log-sum-exp in the last term of B is problematic, we introduce additional variational parameters \u03b6 to simplify B and obtain B (Eqs. 2-3). The E-step deals with all the local variables \u00b5, \u03c3, and \u03b6.",
"cite_spans": [
{
"start": 77,
"end": 106,
"text": "(Wainwright and Jordan, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "Fixing other variables and taking the derivative of the boundB w.r.t. \u03b6 t and setting it to zero, we obtain the closed-form update for \u03b6 t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "\u03b6 t = v\u2208V exp (n 1:t\u22121,v ) exp \u00b5 t,v + \u03c3t,v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "2 . To minimize with respect to \u00b5 t and \u03c3 t , we apply gradient-based methods since there are no closedform solutions. The derivative w.r.t. \u00b5 t,v is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "\u2202B \u2202\u00b5 t,v = \u00b5 t,v \u2212 C k \u00b5 k,v \u03d5 \u2212 n t,v + n t \u03b6 t exp (n 1:t\u22121,v ) exp \u00b5 t,v + \u03c3 t,v 2 , where n t = v\u2208V n t,v . The derivative w.r.t. \u03c3 t,v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "\u2202B \u2202\u03c3 t,v = 1 2\u03c3 t,v + 1 2\u03d5 + n t 2\u03b6 t exp (n 1:t\u22121,v ) exp \u00b5 t,v + \u03c3 t,v 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "Although we require iterative methods in the E-step, we find it to be reasonably fast in practice. 4 Specifically, we use the L-BFGS quasi-Newton algorithm (Liu and Nocedal, 1989) . We can further improve the bound by updating the variational parameters for timestep 1 : t \u2212 1, i.e., \u00b5 1:t\u22121 and \u03c3 1:t\u22121 , as well. However, this will require storing the texts from previous timesteps. Additionally, this will complicate the M-step update described",
"cite_spans": [
{
"start": 99,
"end": 100,
"text": "4",
"ref_id": null
},
{
"start": 156,
"end": 179,
"text": "(Liu and Nocedal, 1989)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "B = \u2212 T t=1 E q [log p(\u03b2 t | \u03b2 k , \u03b1, x t )] \u2212 T t=1 E q [log p(w t | \u03b2 t , n t )] \u2212 H(q) (1) = T t=1 \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 2 j\u2208V log \u03c3 t,j \u03d5 \u2212 E q \uf8ee \uf8ef \uf8f0\u2212 \u03b2 t \u2212 t\u22121 k=t\u2212c C k \u03b2 k 2 2\u03d5 \uf8f9 \uf8fa \uf8fb \u2212 E q \uf8ee \uf8f0 v\u2208wt n 1:t\u22121,v + \u03b2 t,v \u2212 log j\u2208V exp(n 1:t\u22121,j + \u03b2 t,j ) \uf8f9 \uf8fb \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe (2) \u2264 T t=1 \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 2 j\u2208V log \u03c3 t,v \u03d5 + \u00b5 t \u2212 t\u22121 k=t\u2212c C k \u00b5 k 2 2\u03d5 + \u03c3 t + t\u22121 k=t\u2212c C 2 k \u03c3 k 2\u03d5 \u2212 v\u2208wt \uf8eb \uf8ed \u00b5 t,v \u2212 log \u03b6 t \u2212 1 \u03b6 t j\u2208V exp (n 1:t\u22121,j ) exp \u00b5 t,j + \u03c3 t,j 2 \uf8f6 \uf8f8 \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe + const",
"eq_num": "(3)"
}
],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "Figure 2: The variational bound that we seek to minimize, B. H(q) is the entropy of the variational distribution q. The derivation from line 1 to line 2 is done by replacing the probability distributions p(\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "t | \u03b2 k , \u03b1, x t ) and p(w t | \u03b2 t , n t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "by their respective functional forms. Notice that in line 3 we compute the expectations under the variational distributions and further bound B by introducing additional variational parameters \u03b6 using Jensen's inequality on the log-sum-exp in the last term. We denote the new boundB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "below. Therefore, for each s < t, we choose to fix \u00b5 s and \u03c3 s once they are learned at timestep s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.1"
},
{
"text": "In the M-step, we update the global parameter \u03b1, fixing \u00b5 1:t . Fixing other parameters and taking the derivative ofB w.r.t. \u03b1, we obtain: 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "\u2202B \u2202\u03b1 = (\u00b5 t \u2212 t\u22121 k=t\u2212c C k \u00b5 k )(\u2212 t\u22121 k=t\u2212c \u2202C k \u2202\u03b1 ) \u03d5 + t\u22121 k=t\u2212c C k \u03c3 k \u2202C k \u2202\u03b1 \u03d5 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "\u2202C k \u2202\u03b1 =C k f (x t , x k ) \u2212C k t\u22121 s=t\u2212c f (x t , x s ) exp(\u03b1 f (x t , x s )) t\u22121 s=t\u2212c exp(\u03b1 f (x t , x s ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "We follow the convex optimization strategy and simply perform a stochastic gradient update: (Zinkevich, 2003) . While the variational boundB is not convex, given the local variables \u00b5 1:t 5 In our implementation, we augment \u03b1 with a squared L2 regularization term (i.e., we assume that \u03b1 is drawn from a normal distribution with mean zero and variance \u03bb) and use the FOBOS algorithm (Duchi and Singer, 2009) . The derivative of the regularization term is simple and is not shown here. Of course, other regularizers (e.g., the L1-norm, which we use for other parameters, or the L 1/\u221e -norm) can also be explored. and \u03c3 1:t , optimizing \u03b1 at timestep t without knowing the future becomes a convex problem. 6 Since we do not reestimate \u00b5 1:t\u22121 and \u03c3 1:t\u22121 in the E-step, the choice to perform online gradient descent instead of iteratively performing batch optimization at every timestep is theoretically justified.",
"cite_spans": [
{
"start": 92,
"end": 109,
"text": "(Zinkevich, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 383,
"end": 407,
"text": "(Duchi and Singer, 2009)",
"ref_id": "BIBREF7"
},
{
"start": 704,
"end": 705,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "\u03b1 t+1 = \u03b1 t + \u03b7 t \u2202B \u2202\u03b1t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "Notice that our overall learning procedure is still to minimize the variational upper boundB. All these choices are made to make the model suitable for learning in real time from large streaming datasets. Preliminary experiments showed that performing more than one EM iteration per day does not considerably improve performance, so in our experiments we perform one EM iteration per day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "To learn the parameters of the model, we rely on approximations and optimize an upper boundB. We have opted for this approach over alternatives (such as MCMC methods) because of our interest in the online, large-data setting. Our experiments show that we are still able to learn reasonable parameter estimates by optimizingB. Like online variational methods for other latent-variable models such as LDA (Sato, 2001; Hoffman et al., 2013) , open questions remain about the tightness of such approximations and the identifiability of model parameters. We note, how-ever, that our model does not include latent mixtures of topics and may be generally easier to estimate.",
"cite_spans": [
{
"start": 403,
"end": 415,
"text": "(Sato, 2001;",
"ref_id": "BIBREF22"
},
{
"start": 416,
"end": 437,
"text": "Hoffman et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-step",
"sec_num": null
},
{
"text": "As described in \u00a72.2, our model is evaluated by the loss suffered at every timestep, where the loss is defined as the negative log likelihood of the model on text at timestep w t . Therefore, at each timestep t, we need to predict (the distribution of) w t . In order to do this, for each word v \u2208 V , we simply compute the deviation means \u03b2 t,v as weighted combinations of previous means, where the weights are determined by the world-context similarity encoded in x:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5"
},
{
"text": "E q [\u03b2 t,v | \u00b5 t,v ] = t\u22121 k=t\u2212c exp(\u03b1 f (x t , x k )) t\u22121 j=t\u2212c exp(\u03b1 f (x t , x j )) \u00b5 k,v .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5"
},
{
"text": "Recall that the word distribution that we use for prediction is obtained by applying the operator \u03c0 that maps \u03b2 t and n to the V -dimensional simplex, forming a distribution over words: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5"
},
{
"text": "\u03c0(\u03b2 t , n 1:t\u22121 ) v = exp(n 1:t\u22121,v +\u03b2t,v) P j\u2208V exp(n 1:t\u22121,j +\u03b2 t,j ) , where n 1:t\u22121,v \u2208 R V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5"
},
{
"text": "In our experiments, we consider the problem of predicting economy-related text appearing in news and microblogs, based on observable features that reflect current economic conditions in the world at a given time. In the following, we describe our dataset in detail, then show experimental results on text prediction. In all experiments, we set the window size c = 7 (one week) or c = 14 (two weeks), \u03bb = 1 2|V | (V is the size of vocabulary of the dataset under consideration), and \u03d5 = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Our data contains metadata and text corpora. The metadata is used as our features, whereas the text corpora are used for learning language models and predictions. The dataset (excluding Twitter) can be downloaded at http://www.ark.cs.cmu. edu/DynamicLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "6.1"
},
{
"text": "We use end-of-day stock prices gathered from finance.yahoo.com for each stock included in the Standard & Poor's 500 index (S&P 500). The index includes large (by market value) companies listed on US stock exchanges. 7 We calculate daily (continuously compounded) returns for each stock, o: r o,t = log P o,t \u2212 log P o,t\u22121 , where P o,t is the closing stock price. 8 We make a simplifying assumption that text for day t is generated after P o,t is observed. 9 In general, stocks trade Monday to Friday (except for federal holidays and natural disasters). For days when stocks do not trade, we set r o,t = 0 for all stocks since any price change is not observed.",
"cite_spans": [
{
"start": 216,
"end": 217,
"text": "7",
"ref_id": null
},
{
"start": 364,
"end": 365,
"text": "8",
"ref_id": null
},
{
"start": 457,
"end": 458,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metadata",
"sec_num": "6.1.1"
},
{
"text": "We transform returns into similarity values as follows: f (x o,t , x o,k ) = 1 iff sign(r o,t ) = sign(r o,k ) and 0 otherwise. While this limits the model by ignoring the magnitude of price changes, it is still reasonable to capture the similarity between two days. 10 There are 500 stocks in the S&P 500, so x t \u2208 R 500 and f (x t , x k ) \u2208 R 500 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metadata",
"sec_num": "6.1.1"
},
{
"text": "We have five streams of text data. The first four corpora are news streams tracked through Reuters. 11 Two of them are written in English, North American Business Report (EN:NA) and Japanese Investment News (EN:JP). The remaining two are German Economic News Service (DE, in German) and French Economic News Service (FR, in French) . For all four of the Reuters streams, we collected news data over a period of thirteen months (392 days), 2012-05-26 to 2013-06-21. See Table 1 for descriptive statistics of these datasets. Numerical terms are mapped to a single word, and all letters are downcased.",
"cite_spans": [
{
"start": 316,
"end": 331,
"text": "(FR, in French)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Text data",
"sec_num": "6.1.2"
},
{
"text": "The last text stream comes from the Decahose/Gardenhose stream from Twitter. We collected public tweets that contain ticker symbols (i.e., symbols that are used to denote stocks of a particular company in a stock market), preceded by the dollar Dataset Total # Doc. Avg. # Doc. #Days .807 182 392 16,026,380 10,000 7,047,095 5,000 FR 62,355 160 392 11,942,271 10,000 3,773,517 5,000 DE 51,515 132 392 9,027,823 10,000 3,499,965 5,000 Twitter 214,794 336 639 1,660,874 10,000 551,768 5,000 sign $ (e.g., $GOOG, $MSFT, $AAPL, etc.). These tags are generally used to indicate tweets about the stock market. We look at tweets from the period 2011-01-01 to 2012-09-30 (639 days). As a result, we have approximately 100-800 tweets per day. We tokenized the tweets using the CMU ARK TweetNLP tools, 12 numerical terms are mapped to a single word, and all letters are downcased. We perform two experiments using unigram and bigram language models as the base models. For each dataset, we consider the top 10,000 unigrams after removing corpus-specific stopwords (the top 100 words with highest frequencies). For the bigram experiments, we only use 5,000 words to limit the number of unique bigrams so that we can simulate experiments for the entire time horizon in a reasonable amount of time. In standard open-vocabulary language modeling experiments, the treatment of unknown words deserves care. We have opted for a controlled, closed-vocabulary experiment, since standard smoothing techniques will almost surely interact with temporal dynamics and context in interesting ways that are out of scope in the present work.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 494,
"text": ".807 182 392 16,026,380 10,000 7,047,095 5,000 FR 62,355 160 392 11,942,271 10,000 3,773,517 5,000 DE 51,515 132 392 9,027,823 10,000 3,499,965 5,000 Twitter 214,794 336 639 1,660,874",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Text data",
"sec_num": "6.1.2"
},
{
"text": "Since this is a forecasting task, at each timestep, we only have access to data from previous timesteps. Our model assumes that all words in all documents in a corpus come from a single multinomial distribution. Therefore, we compare our approach to the corresponding base models (standard unigram and bigram language models) over the same vocabulary (for each stream). The first one maintains counts of every word and updates the counts at each timestep. This corresponds to a base model that uses all of the available data up to the current timestep (\"base all\"). The second one replaces counts of every word with the counts from the previous timestep (\"base one\"). Additionally, we also compare with a base model whose counts decay exponentially (\"base exp\"). That is, the counts from previous timesteps decay by exp(\u2212\u03b3s), where s is the distance between previous timesteps and the current timestep and \u03b3 is the decay constant. We set the decay constant \u03b3 = 1. We put a symmetric Dirichlet prior on the counts (\"add-one\" smoothing); this is analogous to our treatment of the background frequencies n in our model. Note that our model, similar to \"base all,\" uses all available data up to timestep t \u2212 1 when making predictions for timestep t. The window size c only determines which previous timesteps' models can be chosen for making a prediction today. The past models themselves are estimated from all available data up to their respective timesteps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.2"
},
{
"text": "We also compare with two strong baselines: a linear interpolation of \"base one\" models for the past week (\"int. week\") and a linear interpolation of \"base all\" and \"base one\" (\"int one all\"). The interpolation weights are learned online using the normalized exponentiated gradient algorithm (Kivinen and Warmuth, 1997) , which has been shown to enjoy a stronger regret guarantee compared to standard online gradient descent for learning a convex combination of weights.",
"cite_spans": [
{
"start": 291,
"end": 318,
"text": "(Kivinen and Warmuth, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.2"
},
{
"text": "We evaluate the perplexity on unseen dataset to evaluate the performance of our model. Specifically, we use per-word predictive perplexity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "perplexity = exp \u2212 T t=1 log p(w t | \u03b1, x 1:t , n 1:t\u22121 ) T t=1 j\u2208V w t,j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "Note that the denominator is the number of tokens up to timestep T . Lower perplexity is better. Table 2 and Table 3 show the perplexity results for",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 116,
"text": "Table 2 and Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "Dataset base all base one base exp int. week int. one all c = 7 c = 14 EN: NA 3,341 3,677 3,486 3,403 3,271 3,262 3,285 EN:JP 2,802 3,212 2,750 2,949 2,708 2,656 2,689 FR 3,603 3,910 3,678 3,625 3,416 3,404 3,438 DE 3,789 4,199 3,979 3,926 3,634 3,649 3,687 Twitter 3,880 6,168 5,133 5,859 4,047 3,801 3,819 Table 2 : Perplexity results for our five data streams in the unigram experiments. The base models in \"base all,\" \"base one,\" and \"base exp\" are unigram language models. \"int.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 351,
"text": "NA 3,341 3,677 3,486 3,403 3,271 3,262 3,285 EN:JP 2,802 3,212 2,750 2,949 2,708 2,656 2,689 FR 3,603 3,910 3,678 3,625 3,416 3,404 3,438 DE 3,789 4,199 3,979 3,926 3,634 3,649 3,687 Twitter 3,880 6,168 5,133 5,859 4,047 3,801 3,819 Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "week\" is a linear interpolation of \"base one\" from the past week. \"int. one all\" is a linear interpolation of \"base one\" and \"base all\". The rightmost two columns are versions of our model. Best results are highlighted in bold. Table 3 : Perplexity results for our five data streams in the bigram experiments. The base models in \"base all,\" \"base one,\" and \"base exp\" are bigram language models. \"int.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "week\" is a linear interpolation of \"base one\" from the past week. \"int. one all\" is a linear interpolation of \"base one\" and \"base all\". The rightmost column is a version of our model with c = 7. Best results are highlighted in bold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "each of the datasets for unigram and bigram experiments respectively. Our model outperformed other competing models in all cases but one. Recall that we only define the similarity function of world context as: f (x o,t , x o,k ) = 1 iff sign(r o,t ) = sign(r o,k ) and 0 otherwise. A better similarity function (e.g., one that takes into account market size of the company and the magnitude of increase or decrease in the stock price) might be able to improve the performance further. We leave this for future work. Furthermore, the variations can be captured using models from the past week. We discuss why increasing c from 7 to 14 did not improve performance of the model in more detail in \u00a76.4. We can also see how the models performed over time. Figure 4 traces perplexity for four Reuters news stream datasets. 13 We can see that in some cases the performance of the \"base all\" model degraded over time, whereas our model is more robust to temporal 13 In both experiments, in order to manage the time and space complexities of updating \u03b2, we apply a sparsity shrinkage technique by using OWL-QN (Andrew and Gao, 2007) when maximizing it, with regularization constant set to 1. Intuitively, this is equivalent to encouraging the deviation vector to be sparse (Eisenstein et al., 2011) .",
"cite_spans": [
{
"start": 817,
"end": 819,
"text": "13",
"ref_id": null
},
{
"start": 955,
"end": 957,
"text": "13",
"ref_id": null
},
{
"start": 1101,
"end": 1123,
"text": "(Andrew and Gao, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 1264,
"end": 1289,
"text": "(Eisenstein et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 751,
"end": 759,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "shifts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "In the bigram experiments, we only ran our model with c = 7, since we need to maintain \u03b2 in R V 2 , instead of R V in the unigram model. The goal of this experiment is to determine whether our method still adds benefit to more expressive language models. Note that the weights of the linear interpolation models are also learned in an online fashion since there are no classical training, development, and test sets in our setting. Since the \"base one\" model performed poorly in this experiment, the performance of the interpolated models also suffered. For example, the \"int. one all\" model needed time to learn that the \"base one\" model has to be downweighted (we started with all interpolated models having uniform weights), so it was not able to outperform even the \"base all\" model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "It should not be surprising that conditioning on world-context reduces perplexity (Cover and Thomas, 1991) . A key attraction of our model, we believe, lies in the ability to inspect its parameters.",
"cite_spans": [
{
"start": 93,
"end": 106,
"text": "Thomas, 1991)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "6.4"
},
{
"text": "Deviation coefficients. Inspecting the model allows us to gain insight into temporal trends. We investigate the deviations learned by our model on the Twitter dataset. Examples are shown in Figure 3 . The left plot shows \u03b2 for four words related to Google: goog, #goog, @google, google+. For comparison, we also show the return of Google stock for the corresponding timestep (scaled by 50 and centered at 0.5 for readability, smoothed using loess (Cleveland, 1979) , denoted by r GOOG in the plot). We can see that significant changes of return of Google stocks (e.g., the r GOOG spikes between timesteps 50-100, 150-200, 490-550 in the plot) occurred alongside an increase in \u03b2 of Google-related words. Similar trends can also be observed for Microsoft-related words in the right plot. The most significant loss of return of Microsoft stocks (the downward spike near timestep 500 in the plot) is followed by a sudden sharp increase in \u03b2 of the words #microsoft and microsoft.",
"cite_spans": [
{
"start": 447,
"end": 464,
"text": "(Cleveland, 1979)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "6.4"
},
{
"text": "Feature coefficients. We can also inspect the learned feature coefficients \u03b1 to investigate which stocks have higher associations with the text that is generated. Our feature coefficients are designed to reflect which changes (or lack of changes) in stock prices influence the word distribution more, not which stocks are talked about more often. We find that the feature coefficients do not correlate with obvious company characteristics like market capitalization (firm size). For example, on the Twitter dataset with bigram base models, the five stocks with the highest weights are: ConAgra Foods Inc., Intel Corp., Bristol-Myers Squibb, Frontier Communications Corp., and Amazon.com Inc. Strongly negative weights tended to align with streams with less activ- ity, suggesting that these were being used to smooth across all c days of history. A higher weight for stock o implies an increase in probability of choosing models from previous timesteps s, when the state of the world for the current timestep t and timestep s is the same (as represented by our similarity function) with respect to stock o (all other things being equal), and a decrease in probability for a lower weight. in the past they are at the time of use, aggregated across rounds on the EN:NA dataset, for window size c = 14. It shows that the model tends to favor models from days closer to the current date, with the t \u2212 1 models selected the most, perhaps because the state of the world today is more similar to dates closer to today compare to more distant dates. The plot also explains why increasing c from 7 to 14 did not improve performance of the model, since most of the variation in our datasets can be captured with models from the past week.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "6.4"
},
{
"text": "Topics. Latent topic variables have often figured heavily in approaches to dynamic language modeling. In preliminary experiments incorporating singlemembership topic variables (i.e., each document belongs to a single topic, as in a mixture of unigrams), we saw no benefit to perplexity. Incorporating topics also increases computational cost, since we must maintain and estimate one language model per topic, per timestep. It is straightforward to design models that incorporate topics with single-or mixedmembership as in LDA (Blei et al., 2003) , an interesting future direction.",
"cite_spans": [
{
"start": 527,
"end": 546,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "6.4"
},
{
"text": "Potential applications. Dynamic language models like ours can be potentially useful in many applications, either as a standalone language model, e.g., predictive text input, whose performance may depend on the temporal dimension; or as a component in applications like machine translation or speech recognition. Additionally, the model can be seen as a step towards enhancing text understanding with numerical, contextual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "6.4"
},
{
"text": "We presented a dynamic language model for streaming datasets that allows conditioning on observable real-world context variables, exemplified in our experiments by stock market data. We showed how to perform learning and inference in an online fashion for this model. Our experiments showed the predictive benefit of such conditioning and online learning by comparing to similar models that ignore temporal dimensions and observable variables that influence the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 181-192. Action Editor: Eric Fosler-Lussier. Submitted 10/2013; Revised 2/2014; Published 4/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ";Honkela and Valpola, 2003). Since we set \u03b41:t\u2212c\u22121 = 0, at every timestep t, \u03b4 k leads to forgetting older examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Approximately 16.5 seconds/day (walltime) to learn the model on the EN:NA dataset on a 2.40GHz CPU with 24GB memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As a result, our algorithm is Hannan consistent w.r.t. the best fixed \u03b1 (forB) in hindsight; i.e., the average regret goes to zero as T goes to \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For a list of companies listed in the S&P 500 as of 2012, see http://en.wikipedia.org/wiki/List_ of_S\\%26P_500_companies. This set was fixed during the time periods of all our experiments.8 We use the \"adjusted close\" on Yahoo that includes interim dividend cash flows and also adjusts for \"splits\" (changes in the number of outstanding shares).9 This is done in order to avoid having to deal with hourly timesteps. In addition, intraday price data is only available through commercial data provided.10 Note that daily stock returns are equally likely to be positive or negative and display little serial correlation.11 http://www.reuters.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ark.cs.cmu.edu/TweetNLP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank several anonymous reviewers for helpful feedback on earlier drafts of this paper and Brendan O'Connor for help with collecting Twitter data. This research was supported in part by Google, by computing resources at the Pittsburgh Supercomputing Center, by National Science Foundation grant IIS-1111142, AFOSR grant FA95501010247, ONR grant N000140910758, and by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center contract number D12PC00347. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scalable training of l 1 -regularized log-linear models",
"authors": [
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable training of l 1 -regularized log-linear models. In Proc. of ICML.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dynamic topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and John D. Lafferty. 2006. Dynamic topic models. In Proc. of ICML.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to online optimization",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Bubeck",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Bubeck. 2011. Introduction to online opti- mization. Technical report, Department of Operations Research and Financial Engineering, Princeton Univer- sity.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Prediction, Learning, and Games",
"authors": [
{
"first": "Nicol\u00f2",
"middle": [],
"last": "Cesa-Bianchi",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Lugosi",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicol\u00f2 Cesa-Bianchi and G\u00e1bor Lugosi. 2006. Prediction, Learning, and Games. Cambridge University Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Robust locally weighted regression and smoothing scatterplots",
"authors": [
{
"first": "William",
"middle": [
"S"
],
"last": "Cleveland",
"suffix": ""
}
],
"year": 1979,
"venue": "Journal of the American Statistical Association",
"volume": "74",
"issue": "368",
"pages": "829--836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William S. Cleveland. 1979. Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74(368):829-836.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Elements of Information Theory",
"authors": [
{
"first": "Thomas",
"middle": [
"M"
],
"last": "Cover",
"suffix": ""
},
{
"first": "Joy",
"middle": [
"A"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient online and batch learning using forward backward splitting",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Machine Learning Research",
"volume": "10",
"issue": "7",
"pages": "2899--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10(7):2899- 2934.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A latent variable model for geographic lexical variation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proc. of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sparse additive generative models of text",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proc. of ICML.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Streaming for large scale NLP: Language modeling",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": "III"
},
{
"first": "Suresh",
"middle": [],
"last": "Venkatasubramanian",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Goyal, Hal Daume III, and Suresh Venkatasubrama- nian. 2009. Streaming for large scale NLP: Language modeling. In Proc. of HLT-NAACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stochastic variational inference",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Paisley",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Machine Learning Research",
"volume": "14",
"issue": "",
"pages": "1303--1347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Hoffman, David M. Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. Jour- nal of Machine Learning Research, 14:1303-1347.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On-line variational Bayesian learning",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Honkela",
"suffix": ""
},
{
"first": "Harri",
"middle": [],
"last": "Valpola",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ICA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti Honkela and Harri Valpola. 2003. On-line varia- tional Bayesian learning. In Proc. of ICA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Online multiscale dynamic topic models",
"authors": [
{
"first": "Tomoharu",
"middle": [],
"last": "Iwata",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yasushi",
"middle": [],
"last": "Sakurai",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoharu Iwata, Takeshi Yamada, Yasushi Sakurai, and Naonori Ueda. 2010. Online multiscale dynamic topic models. In Proc. of KDD.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exponentiated gradient versus gradient descent for linear predictors",
"authors": [
{
"first": "Jyrki",
"middle": [],
"last": "Kivinen",
"suffix": ""
},
{
"first": "Manfred",
"middle": [
"K"
],
"last": "Warmuth",
"suffix": ""
}
],
"year": 1997,
"venue": "Information and Computation",
"volume": "132",
"issue": "",
"pages": "1--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jyrki Kivinen and Manfred K. Warmuth. 1997. Expo- nentiated gradient versus gradient descent for linear predictors. Information and Computation, 132:1-63.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mining of concurrent text and time series",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Schmill",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Lawrie",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Ogilvie",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jensen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of KDD Workshop on Text Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko, Matt Schmill, Dawn Lawrie, Paul Ogilvie, David Jensen, and James Allan. 2000. Mining of concurrent text and time series. In Proc. of KDD Workshop on Text Mining.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stream-based randomised language models for SMT",
"authors": [
{
"first": "Abby",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abby Levenberg and Miles Osborne. 2009. Stream-based randomised language models for SMT. In Proc. of EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Stream-based translation models for statistical machine translation",
"authors": [
{
"first": "Abby",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abby Levenberg, Chris Callison-Burch, and Miles Os- borne. 2010. Stream-based translation models for sta- tistical machine translation. In Proc. of HLT-NAACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On the limited memory BFGS method for large scale optimization",
"authors": [
{
"first": "Dong",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming B",
"volume": "45",
"issue": "3",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, 45(3):503-528.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Topic models conditioned on arbitrary features with Dirichletmultinomial regression",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno and Andrew McCallum. 2008. Topic mod- els conditioned on arbitrary features with Dirichlet- multinomial regression. In Proc. of UAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lecture notes on online learning",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rakhlin",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rakhlin. 2009. Lecture notes on online learn- ing. Technical report, Department of Statistics, The Wharton School, University of Pennsylvania.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Online model selection based on the variational bayes",
"authors": [
{
"first": "Masaaki",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2001,
"venue": "Neural Computation",
"volume": "13",
"issue": "7",
"pages": "1649--1681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaaki Sato. 2001. Online model selection based on the variational bayes. Neural Computation, 13(7):1649- 1681.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Online learning and online convex optimization. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "4",
"issue": "",
"pages": "107--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shai Shalev-Shwartz. 2012. Online learning and online convex optimization. Foundations and Trends in Ma- chine Learning, 4(2):107-194.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "Martin",
"middle": [
"J"
],
"last": "Wainwright",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin J. Wainwright and Michael I. Jordan. 2008. Graph- ical models, exponential families, and variational infer- ence. Foundations and Trends in Machine Learning, 1(1-2):1-305.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Continuous time dynamic topic models",
"authors": [
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Heckerman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chong Wang, David M. Blei, and David Heckerman. 2008. Continuous time dynamic topic models. In Proc. of UAI.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Online variational inference for the hierarchical Dirichlet process",
"authors": [
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Paisley",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chong Wang, John Paisley, and David M. Blei. 2011. On- line variational inference for the hierarchical Dirichlet process. In Proc. of AISTATS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Online convex programming and generalized infinitesimal gradient ascent",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Zinkevich",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Zinkevich. 2003. Online convex programming and generalized infinitesimal gradient ascent. In Proc. of ICML.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "is a background distribution (the log-frequency of word v observed up to time t \u2212 1).",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Deviation coefficients \u03b2 over time for Google-and Microsoft-related words on Twitter with unigram base model (c = 7). Significant changes (increases or decreases) in the returns of Google and Microsoft stocks are usually followed by increases in \u03b2 of related words.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Distributions of the selection probabilities of models from the previous c = 14 timesteps, on the EN:NA dataset with unigram base model. For simplicity, we show E-step modes. The histogram shows that the model tends to favor models from days closer to the current date.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Selected models. Besides feature coefficients, our model captures temporal shift by modeling similarity across the most recent c days. During inference, our model weights different word distributions from the past. The similarity is encoded in the pairwise features f (x t , x k ) and the parameters \u03b1.Figure 5shows the distributions of the strongest-posterior models from previous timesteps, based on how far Perplexity over time for four Reuters news streams (c = 7) with bigram base models.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Statistics about the datasets. Average number of documents (third column) is per day."
}
}
}
}