|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:27:14.647608Z" |
|
}, |
|
"title": "Learning VAE-LDA Models with Rounded Reparameterization Trick", |
|
"authors": [ |
|
{ |
|
"first": "Runzhi", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EECS University of Ottawa", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yongyi", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Ottawa", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Richong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Beihang University", |
|
"location": { |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The introduction of VAE provides an efficient framework for the learning of generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, a central technique of VAE, the reparameterization trick, fails to be applicable. This is because no reparameterization form of Dirichlet distributions is known to date that allows the use of the reparameterization trick. In this work, we propose a new method, which we call Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. This method, when applied to a VAE-LDA model, is shown experimentally to outperform the existing neural topic models on several benchmark datasets and on a synthetic dataset.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The introduction of VAE provides an efficient framework for the learning of generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, a central technique of VAE, the reparameterization trick, fails to be applicable. This is because no reparameterization form of Dirichlet distributions is known to date that allows the use of the reparameterization trick. In this work, we propose a new method, which we call Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. This method, when applied to a VAE-LDA model, is shown experimentally to outperform the existing neural topic models on several benchmark datasets and on a synthetic dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Probabilistic generative models are widely used in topic modelling and have achieved great success in many applications (Deerwester et al., 1990) (Hofmann, 1999) (Blei et al., 2003) (Blei and Lafferty, 2006) . A landmark of topic models is Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , where a document is treated as a bag of words and each word is modelled via a generative process. More specifically, in this generative process, a topic distribution is first drawn from a Dirichlet prior, then a topic is sampled from the topic distribution and a word is drawn subsequently from the word distribution corresponding to the drawn topic. Since its introduction, LDA has shown great power in a large varieties of natural language applications (Wei and Croft, 2006) (AlSumait et al., 2008) (Mehrotra et al., 2013) . However, the classical methods of learning LDA, such as variational techniques and collapsed Gibbs sampling, entails high computation complexity in posterior inference (Blei et al., 2003) (Grif-fiths and Steyvers, 2004) , which limits the ability of LDA on modelling large corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 145, |
|
"text": "(Deerwester et al., 1990)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 161, |
|
"text": "(Hofmann, 1999)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 181, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 207, |
|
"text": "(Blei and Lafferty, 2006)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 293, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 751, |
|
"end": 772, |
|
"text": "(Wei and Croft, 2006)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 773, |
|
"end": 796, |
|
"text": "(AlSumait et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 820, |
|
"text": "(Mehrotra et al., 2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 1010, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1011, |
|
"end": 1042, |
|
"text": "(Grif-fiths and Steyvers, 2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Variational AutoEncoder (VAE) or AutoEncoding Variational Bayes (AEVB) (Kingma and Welling, 2013) provides another choice of learning a generative model. Under the VAE framework, a generative model is specified by first drawing a latent vector z from a prior distribution and then transforming this vector through a neural network, called decoder, which subsequently generates the observation x. Using a variational inference approach, VAE couples the decoder network with another network, called encoder, responsible for computing the posterior distribution of the latent variable z for each observation x. A key technique of VAE is its \"reparameterization trick\", in which sampling from the posterior is performed by sampling a noise variable from some distribution p( ) and then transforming to z using a differentiable function. This technique allows the model to be trained efficiently using back propagation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
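As context for the reparameterization trick described above, the following is a minimal sketch in PyTorch of the standard Gaussian case (variable names are ours and the snippet is illustrative, not the paper's implementation): the sample is re-expressed as a differentiable function of parameter-free noise, so gradients flow back to the posterior parameters.

```python
import torch

# Reparameterization trick for a diagonal Gaussian posterior q(z|x) = N(mu, sigma^2):
# draw noise from a fixed distribution and express z as a differentiable function of it.
mu = torch.zeros(5, requires_grad=True)
log_sigma = torch.zeros(5, requires_grad=True)

eps = torch.randn(5)                    # noise ~ p(eps) = N(0, I), has no learnable parameters
z = mu + torch.exp(log_sigma) * eps     # differentiable in mu and log_sigma

loss = (z ** 2).sum()                   # any downstream objective
loss.backward()                         # gradients reach mu and log_sigma through z
print(mu.grad, log_sigma.grad)
```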
|
{ |
|
"text": "The VAE framework significantly alleviates the computational burden of learning a generative model. Therefore, researchers interested in topic modelling are naturally motivated to consider VAE as an alternative approach to learn LDA, exploiting the power and efficiency of deep learning neural networks. This is also the interest of this paper. However, the key limitation in the application of VAE to Dirichlet-based topic models is that the original reparameterization trick in VAE is not applicable to Dirichlet distributions. In this sense, VAE cannot be directly used for learning any Dirichlet-based topic models. To cope with this, the NVDM model (Miao et al., 2016) discards the Dirichlet assumption and build neural topic models based on Gaussian prior. Although such a Gaussian-based topic model achieves a reasonably good performance on perplexity, the topic words they extracted appear to lack human-interpretability. Additionally the use of Gaussian prior significantly deviates from the desired Dirichlet distribution and arguably has significant room for improvement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 654, |
|
"end": 673, |
|
"text": "(Miao et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The adoption of the Dirichlet prior plays a central role in topic modelling, since it nicely captures the intuition that a topic is sampled from a sparse topic distribution. Due to the importance of the Dirichlet assumption in topic modelling, ProdLDA (Srivastava and Sutton, 2017) attempts to apply VAE to LDA by constructing a Laplace approximation to the Dirichlet prior in the softmax basis. However, the Laplace approximation is only used to estimate the prior parameters and ProdLDA has essentially a Gaussian VAE architecture where the KL divergence is on Gaussian distributions. The work of (Joo et al., 2019) argues that the Laplace approximation in ProdLDA fails to capture the multimodality nature of Dirichlet distributions. They then propose DirVAE, in which an approximation of the inverse Gamma CDF (Knowles, 2015) is used to reparameterize Gamma distributions. The Dirichlet samples are then constructed by normalizing Gamma random variables. However, the approximation of inverse Gamma CDF is accurate only when the shape parameter of the Gamma distribution is much less than 1 (Knowles, 2015). This in turn limits the application scope of DirVAE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 281, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 617, |
|
"text": "(Joo et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we develop a technique, which we call the Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions. The use of RRT enables VAE as an efficient method for learning LDA, based on which we propose a new neural topic model, referred to as \"RRT-VAE\". 1 Experiments on several datasets show that RRT-VAE outperforms NVDM, ProdLDA, and DirVAE. The experimental results strongly demonstrate the applicability of RRT in topic modelling that utilizes VAE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we refer to LDA broadly as a generative model characterized by first drawing a distribution \u03b8 over k topics from a Dirichlet prior Dir (\u03b8|\u03b1) and then through a function f dec , or a decoder, transforming \u03b8 to a distribution P over a vocabulary of n words. That is, \u03b8 \u223c Dir (\u03b8|\u03b1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "P := f dec (\u03b8; \u03b2) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u03b2 is the parameter of the decoder and will be treated as a k \u00d7 n matrix throughout this paper, although other options are also possible. Under this model, the words in a document is regarded as being drawn i.i.d from this distribution P .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the classical LDA model (Blei et al., 2003) , each row of \u03b2 represents a word distribution, and the decoder can be written as", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 46, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f dec (\u03b8; \u03b2) = \u03b8 T \u03b2", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the deep learning paradigm, the decoder may be constructed differently, for example,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f dec (\u03b8) = \u03b8 T Softmax (\u03b2)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "f dec (\u03b8) = Softmax \u03b8 T \u03b2 (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where in both cases, the rows of \u03b2 are unconstrained. Note that (4), presented in (Srivastava and Sutton, 2017) is merely a different parameterization of (3) and will be referred to as the \"standard decoder\" in this paper. The structure in (5), referred to as \"product of experts\" in (Srivastava and Sutton, 2017) , will be called \"prod decoder\" for simplicity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 111, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 313, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2.1" |
|
}, |
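To make the two decoder variants concrete, here is a minimal PyTorch sketch (the tensor shapes and names are ours; this is an illustration of (4) and (5), not the released code). The standard decoder normalizes each row of \u03b2 into a word distribution before mixing, while the prod decoder mixes the unconstrained rows first and normalizes at the end.

```python
import torch

k, n = 50, 2000                               # number of topics, vocabulary size
beta = torch.randn(k, n, requires_grad=True)  # unconstrained decoder parameter (k x n)
theta = torch.softmax(torch.randn(k), dim=0)  # a topic distribution (point on the simplex)

# "Standard" decoder (4): mix row-wise word distributions; the mixture is already normalized.
P_standard = theta @ torch.softmax(beta, dim=1)

# "Prod" decoder (5): mix unconstrained rows first, then normalize (product of experts).
P_prod = torch.softmax(theta @ beta, dim=0)

print(P_standard.sum().item(), P_prod.sum().item())  # both sum to 1 over the n words
```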
|
{ |
|
"text": "The difficulty in learning an LDA model lies in the exact inference of \u03b8. In the classical LDA, exact inference is replaced by approximation methods using a symbolist variational method (Blei et al., 2003) or MCMC (Griffiths and Steyvers, 2004) . In the deep learning era, the development of Variational AutoEncoder (Kingma and Welling, 2013), a connectionist counterpart of the symbolist variational methods, provides an alternative approach to handle this difficulty. When applying VAE to an LDA model, the model is augmented with an encoder network f enc . Specifically, the encoder takes as the input the bagof-words (i.e., word histogram) representation x of a document and outputs a k-dimensional parameter \u03b1, and then the Dirichlet distribution with parameter \u03b1 is taken as the posterior distribution q(\u2022|\u03b1) of \u03b8:", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 205, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 244, |
|
"text": "(Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1 := f enc (x; \u03a0) (6) q(\u2022|\u03b1) := Dir(\u2022|\u03b1)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where \u03a0 denotes the parameters of the encoder. Under the VAE framework, the parameters of the encoder and the decoder are jointly optimized by minimizing the negative Evidence Lower Bound (ELBO):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "L(\u03a0, \u03b2; x) = KL (q(\u03b8|\u03b1)||p(\u03b8|\u03b1))\u2212E q(\u03b8|\u03b1) [J(\u03b8, x)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(8) where p(\u03b8|\u03b1) := Dir(\u03b8|\u03b1), the Dirichlet prior; and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "J(\u03b8, x) := x T log f dec (\u03b8)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We refer to the model specified by the loss function (8) as VAE-LDA. Note that the KL term in (8) has a closed-form expression", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "KL(q(\u03b8|\u03b1)||p(\u03b8|\u03b1)) = log \u0393 \u03b1 i \u2212 log \u0393(\u03b1 i ) \u2212 log \u0393 \u03b1 i + log \u0393(\u03b1 i ) + (\u03b1 i \u2212\u03b1 i ) \u03c8(\u03b1 i ) \u2212 \u03c8 \u03b1 i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
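The closed-form KL term above is straightforward to implement with log-gamma and digamma functions. A minimal PyTorch sketch (our own function name; \u03b1 is the posterior parameter from the encoder and \u03b1\u0302 the prior parameter), with a cross-check against the library implementation:

```python
import torch
from torch.distributions import Dirichlet, kl_divergence

def dirichlet_kl(alpha, alpha_hat):
    # KL( Dir(alpha) || Dir(alpha_hat) ) in closed form, over the last dimension.
    a0 = alpha.sum(-1)
    b0 = alpha_hat.sum(-1)
    return (torch.lgamma(a0) - torch.lgamma(alpha).sum(-1)
            - torch.lgamma(b0) + torch.lgamma(alpha_hat).sum(-1)
            + ((alpha - alpha_hat)
               * (torch.digamma(alpha) - torch.digamma(a0).unsqueeze(-1))).sum(-1))

alpha = torch.full((3, 50), 0.9)       # e.g. encoder outputs for a batch of 3 documents
alpha_hat = torch.full((3, 50), 0.02)  # symmetric Dirichlet prior
print(dirichlet_kl(alpha, alpha_hat))
print(kl_divergence(Dirichlet(alpha), Dirichlet(alpha_hat)))  # should match
```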
|
{ |
|
"text": "The gradient of this term can be obtained directly. The optimization of the second term in (8) is however challenging, since it has no closed-form expression. Additionally, when using a stochastic approximation, one must deal with back-propagating gradient signals through a sampling process. One way to deal with this is to use a score function estimator (Williams, 1992) (Glynn, 1990) . But such an approach is known to give rise to high variances in the gradient estimation, due to which a reliable estimate would require drawing a large number of \u03b8 from the posterior q(\u2022|\u03b1) and make learning inefficient. In the framework of VAE, a \"reparameterization trick\" is introduced as an elegant solution to such a problem, where the posterior is reparameterized as drawing a noise from another distribution and re-expressing the posterior as a differentiable function of the noise. However when the posterior distribution is a Dirichlet distribution (or a related distribution such as Beta and Gamma distributions), no such noise distribution and continuous functions are known to exist. Thus the standard reparameterization trick does not apply to learning VAE-LDA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 372, |
|
"text": "(Williams, 1992)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 386, |
|
"text": "(Glynn, 1990)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VAE-LDA", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To tackle the limitation of the standard reparameterization trick, we propose a new reparameterization method, referred to as rounded reparameterization trick or RRT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given a real number \u2206, we define a \"\u2206rounding\" function \u2022 \u2206 as follows: For any real number a,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "a \u2206 = a \u2206 \u2022 \u2206 (10)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where the operation \u2022 is the integer floor (or \"rounding down\") operation. For example, 3.14159265 \u2206=0.001 = 3.141. When the \u2206rounding operation applies to a vector, it acts on the vector component-wise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
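The \u2206-rounding operation is a one-liner in code. A minimal sketch (PyTorch; the function name is ours), reproducing the example from the text:

```python
import torch

def delta_round(a, delta):
    # Component-wise Delta-rounding: floor(a / Delta) * Delta.
    return torch.floor(a / delta) * delta

print(delta_round(torch.tensor(3.14159265), 0.001))   # ~3.1410
print(delta_round(torch.tensor([0.537, 1.262, 2.999]), 0.5))  # [0.5, 1.0, 2.5]
```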
|
{ |
|
"text": "In RRT, we draw an auxiliary variable\u03b8 from a \"rounded\" posterior distribution q", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 | \u03b1 \u2206 , \u03b8 \u223c q \u03b8 | \u03b1 \u2206", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and compute", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 = g(\u03b8; \u03b1) :=\u03b8 + \u03bb (\u03b1 \u2212 \u03b1 \u2206 )", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Then \u03b8 is used to approximate \u03b8 \u223c q(\u03b8|\u03b1). In 12, the parameter \u03bb is a hyper parameter which will serve to adjust the strength of the gradient. Note that when choosing a very small rounding precision \u2206, we expect that the distribution q(\u2022|\u03b1) of \u03b8 and the distribution q(\u2022|\u03b1) are nearly identical. As a consequence,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
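Putting (10)-(12) together, the following is a minimal sketch of drawing an RRT sample (PyTorch; function name and the default \u2206 and \u03bb values are illustrative, taken from the experimental settings reported later). The Dirichlet sampler itself is treated as a black box; only the residual term \u03bb(\u03b1 \u2212 \u230a\u03b1\u230b_\u2206) carries gradient back to \u03b1.

```python
import torch
from torch.distributions import Dirichlet

def rrt_sample(alpha, delta=1e-10, lam=0.01):
    # (11): draw theta_hat from the posterior evaluated at the rounded parameter;
    # .sample() is not differentiable, so no gradient flows through the sampler anyway.
    alpha_rounded = (torch.floor(alpha / delta) * delta).detach()
    theta_hat = Dirichlet(alpha_rounded).sample()
    # (12): theta = theta_hat + lambda * (alpha - floor-rounding of alpha);
    # torch.floor has zero gradient a.e., so d(theta)/d(alpha) = lambda a.e.
    return theta_hat + lam * (alpha - torch.floor(alpha / delta) * delta)

alpha = torch.rand(4, 50) + 0.5   # stand-in for encoder outputs (batch of 4, 50 topics)
alpha.requires_grad_(True)
theta = rrt_sample(alpha)
theta.sum().backward()            # gradient reaches alpha through the residual term
print(alpha.grad.abs().mean())    # ~= lambda
```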
|
{ |
|
"text": "E q(\u03b8|\u03b1) [J(\u03b8, x)] and its replacement E q(\u03b8|\u03b1) [J(\u03b8, x)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "are also very close to each other. Thus such a replacement keeps the loss function very close to the original loss in (8). For shorter notations, we denote", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A(\u03b1) := E q(\u03b8|\u03b1) [J(\u03b8, x)] (13) A(\u03b1) := E q(\u03b8|\u03b1) [J(\u03b8, x)]", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "L(\u03a0, \u03b2; x) := KL (q(\u03b8|\u03b1)||p(\u03b8|\u03b1)) \u2212 A(\u03b1) (15)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Constructing gradient estimator using RRT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The gradient \u2207 \u03b1 A(\u03b1) can be expressed as a sum of two terms:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2207 \u03b1 A(\u03b1) =\u2207 \u03b1 E q(\u03b8| \u03b1 \u2206) J g(\u03b8, \u03b1), x =\u2207 \u03b1 q \u03b8 | \u03b1 \u2206 J g(\u03b8; \u03b1), x d\u03b8 = \u2207 \u03b1 q \u03b8 | \u03b1 \u2206 J g(\u03b8; \u03b1), x d\u03b8 + q \u03b8 | \u03b1 \u2206 \u2207 \u03b1 J g(\u03b8; \u03b1), x d\u03b8", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The first term in sum is usually estimated through the score function estimator. But this is unnecessary in this case. To see this, note that \u2207 \u03b1 \u03b1 \u2206 = 0 almost everywhere. This implies that the first term is in fact 0 at every \u03b1 for which the gradient exists. The next lemma then immediately follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Lemma 1 For any \u03b1 at which the gradient", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2207 \u03b1 A(\u03b1) exists, \u2207 \u03b1 A(\u03b1) = \u03bbE q(\u03b8| \u03b1 \u2206) \u2207 \u03b8 J(\u03b8, x)| \u03b8=g(\u03b8;\u03b1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The fact that the score function estimator is not needed for estimating the gradient \u2207 \u03b1 A(\u03b1) allows RRT to enjoy a low variance and hence requires very few samples in Monte-Carlo estimation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Using Lemma 1, one can directly express the stochastic (Monte Carlo) estimate of the gradient", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2207 \u03b1 A(\u03b1) as \u2207 \u03b1 A(\u03b1) \u2248 \u03bb N N i=1 \u2207 \u03b8 J(\u03b8, x)| \u03b8=g(\u03b8 i ;\u03b1) (16) where\u03b8 \u223c q \u03b8 | \u03b1 \u2206 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The fact that g is differentiable almost everywhere with respect to \u03b1 allows the gradient signal to back propagate and can be implemented using automatic differentiation libraries. Due to the low variance in this estimator, it is sufficient to sample only a single\u03b8 from q \u03b8 | \u03b1 \u2206 , namely, take N = 1 in (16).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
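Because g is built from elementary differentiable operations (plus a floor whose gradient is zero almost everywhere), the single-sample (N = 1) estimator falls out of automatic differentiation with no extra machinery. A minimal end-to-end training-step sketch is given below; it reuses the illustrative dirichlet_kl and rrt_sample helpers sketched earlier, uses the "prod" decoder of (5), and follows the experimental setup (exponential on the encoder output, renormalization of the RRT sample); it is a sketch under those assumptions, not the released implementation.

```python
import torch

def training_step(x, encoder, beta, prior_alpha, optimizer, delta=1e-10, lam=0.01):
    # x: (batch, vocab) bag-of-words counts.
    alpha = torch.exp(encoder(x))                      # positive posterior parameters
    theta = rrt_sample(alpha, delta=delta, lam=lam)    # single RRT sample (N = 1)
    theta = theta / theta.sum(-1, keepdim=True)        # normalize before decoding
    log_p = torch.log_softmax(theta @ beta, dim=-1)    # "prod" decoder of (5)
    recon = -(x * log_p).sum(-1)                       # -J(theta, x)
    kl = dirichlet_kl(alpha, prior_alpha.expand_as(alpha))
    loss = (kl + recon).mean()                         # negative ELBO, eq. (8)
    optimizer.zero_grad()
    loss.backward()                                    # gradients reach the encoder via RRT
    optimizer.step()
    return loss.item()
```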
|
{ |
|
"text": "At this end, we conclude that the loss function L obtained by replacing \u03b8 with \u03b8 is very close to the original loss function L, and a low-variance gradient estimator can be easily constructed from L. This completes the description of RRT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rounded Reparameterization Trick", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Notably the \u2206-rounding function in RRT induces discontinuities in the resulting loss function L. This is because A(\u03b1) is discontinuous in \u03b1 and countably many discontinuity points exist. One may be concerned with whether an update of \u03b1 may \"hop over\" a discontinuity point of A(\u03b1) and cause training unstable or diverge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To that end, we have the following result.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma 2 Suppose that J is \u03b6-lipschitz in \u03b8 and A(\u03b1) is \u03b3-lipschitz in \u03b1. Then for any integer m,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A(m\u2206) \u2212 A(m\u2206 \u2212 ) < (\u03b3 + \u03b6\u03bb)\u2206 when \u2192 \u2206.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We note that when \u2192 \u2206, the quantity A(m\u2206) \u2212 A(m\u2206 \u2212 ) measures the magnitude of a sudden rise or drop when an update hops over the discontinuity point \u03b1 = m\u2206. When this magnitude is small, the discontinuity causes little impact on the stability of training. The upper bound of this quantity given by this lemma suggests that as long as J(\u03b8) and the objective function A(\u03b1) are reasonably smooth, one may control this magnitude to be small by choosing a relatively small \u2206. On the other hand, in case one indeed chooses a relatively large \u2206, the bound of this magnitude may become quite large. However in this case, the update will have much smaller chance of hopping over a discontinuity point, and one still expects no serious problem caused by these discontinuities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We now present the proof. Proof: Clearly, A(m\u2206) = A(m\u2206). And", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A(m\u2206 \u2212 ) =E q(\u03b8|(m\u22121)\u2206) J(\u03b8 + \u03bb(\u2206 \u2212 )) \u2248E q(\u03b8|(m\u22121)\u2206) J(\u03b8) + \u03bb(\u2206 \u2212 )J (\u03b8) =A((m \u2212 1)\u2206) + \u03bb(\u2206 \u2212 ) \u2022 E q(\u03b8|(m\u22121)\u2206) J (\u03b8) Since J is \u03b6-lipschitz, A((m \u2212 1)\u2206) \u2212 \u03b6\u03bb(\u2206 \u2212 ) < A(m\u2206 \u2212 ) < A((m \u2212 1)\u2206) + \u03b6\u03bb(\u2206 \u2212 ) It follows A(m\u2206) \u2212 A((m \u2212 1)\u2206) + \u03b6\u03bb(\u2206 \u2212 ) > A(m\u2206) \u2212 A(m\u2206 \u2212 ) > A(m\u2206) \u2212 A((m \u2212 1)\u2206) \u2212 \u03b6\u03bb(\u2206 \u2212 ) Since A(\u2022) is \u03b3-lipschitz, then \u03b3\u2206 + \u03b6\u03bb(\u2206 \u2212 ) > A(m\u2206) \u2212 A(m\u2206 \u2212 ) > \u2212\u03b3\u2206 \u2212 \u03b6\u03bb(\u2206 \u2212 ) It follows A(m\u2206) \u2212 A(m\u2206 \u2212 ) < (\u03b3 + \u03b6\u03bb)\u2206", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This proves the lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2 It is clear that when \u2206 is small, the discontinuity is not obvious and has small impact on the optimization of the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "On the discontinuities induced by RRT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Beyond topic modelling, another theme of research related to this work is the estimation of gradient in neural networks containing stochastic nodes or samplers. In this setting, one desires that the gradient signal is capable of back-propagating through the samplers. A classical method for this purpose is to construct a score function estimator, also known as the \"log derivative trick\" or REINFORCE (Williams, 1992) (Glynn, 1990) . However, despite giving an unbiased estimate, the Monte-Carlo implementation of such an estimator typically suffers from a high variance, and thus relies on some additional variance-reduction techniques (Greensmith et al., 2004) . Reparameterization trick(Kingma and Welling, 2013), as mentioned above, may also be used to back-propagate gradients through samples and enjoys a low-variance advantage. Unfortunately this technique is not applicable to many distributions such as Gamma, Beta and Dirichlet distributions. Various efforts have been spent on extending the applicability of reparameterization trick to a broader range. These works include, for example, G-REP (Ruiz et al., 2016), RSVI (Naesseth et al., 2016) and Implicit Reparameterization Gradients (Figurnov et al., 2018) , etc. These methods usually involve complicated gradient derivations and are often difficult to implement in neural networks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 418, |
|
"text": "(Williams, 1992)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 432, |
|
"text": "(Glynn, 1990)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 663, |
|
"text": "(Greensmith et al., 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1131, |
|
"end": 1154, |
|
"text": "(Naesseth et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1197, |
|
"end": 1220, |
|
"text": "(Figurnov et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To quantitatively evaluate RRT-VAE, we conduct experiments on synthetic datasets and five realworld datasets. Our model is compared with several existing topic models: Online LDA (Hoffman et al., 2010) , NVDM (Miao et al., 2016) , ProdLDA (Srivastava and Sutton, 2017) and DirVAE (Joo et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 201, |
|
"text": "(Hoffman et al., 2010)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "(Miao et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 298, |
|
"text": "(Joo et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the experiments, we adopt three MLPs with ReLU activations as the encoder of RRT-VAE, where each hidden layer is set to 500 dimensions. We apply an exponential function on the outputs of the encoder, so that the outputs are positive values. The topic distribution vectors are sampled through RRT and then normalized before being passed to the decoder. For Online LDA, we use the standard implementation from scikit-learn (Pedregosa et al., 2011) . The encoder structures of NVDM, ProdLDA and DirVAE are built according to (Miao et al., 2016) , (Srivastava and Sutton, 2017) and (Joo et al., 2019) , where in our experiments the dimension of each hidden layer is set to 500.", |
|
"cite_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 448, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 544, |
|
"text": "(Miao et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 576, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 599, |
|
"text": "(Joo et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
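For concreteness, the following PyTorch sketch shows the kind of encoder described here; we read "three MLPs with ReLU activations" as a three-hidden-layer MLP with 500-dimensional hidden layers, and everything else (class name, sizes in the usage example) is our own illustrative choice rather than the released implementation.

```python
import torch
import torch.nn as nn

class RRTEncoder(nn.Module):
    # Three 500-dimensional hidden layers with ReLU; an exponential on the output
    # keeps the k-dimensional Dirichlet parameter strictly positive.
    def __init__(self, vocab_size, num_topics, hidden=500):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_topics),
        )

    def forward(self, bow):
        return torch.exp(self.net(bow))   # positive parameter alpha for q(.|alpha)

enc = RRTEncoder(vocab_size=2000, num_topics=50)
alpha = enc(torch.rand(8, 2000))
print(alpha.shape, (alpha > 0).all().item())
```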
|
{ |
|
"text": "On the real-world datasets, we adopt the prod decoder, since the standard decoder appears to extract many repetitive topic words (see Appendix B.1). 2 2 As reported in (Srivastava and Sutton, 2017) , ProdLDA also appears to extract many repetitive words when using the On the synthetic datasets, we adopt the standard decoder, which is examined to be superior to the prod decoder on this learning task (see Appendix A.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 197, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Synthetic datasets. We construct three synthetic datasets based on the LDA generative process: a 30 \u00d7 500 topic-word probability matrix \u03b2 g is generated as the ground truth; each dataset is then generated based on \u03b2 g using different Dirichlet priors \u03b1 g \u20221 \u2208 R 30 , where 1 denotes the all-one vector. We set \u03b1 g to [0.01, 0.05, 0.1] for the three datasets and the vocabulary size to 500. Each dataset has 20000 training examples. Real-world datasets. We use five real-world datasets in our experiments: 20NG, RCV1-v2, 3 AGNews 4 , DBPeida (Lehmann et al., 2015) , and Yelp review polarity (Zhang et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 541, |
|
"end": 563, |
|
"text": "(Lehmann et al., 2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 611, |
|
"text": "(Zhang et al., 2015)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
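The synthetic-data generation described above is a direct instantiation of the LDA generative process. Below is a minimal NumPy sketch under our own assumptions (the document length of 100 words and the concentration 0.1 used to draw the ground-truth \u03b2_g are not specified in the text and are chosen only for illustration):

```python
import numpy as np

def make_synthetic_corpus(alpha_g=0.01, k=30, vocab=500, num_docs=20000, doc_len=100, seed=0):
    # Ground-truth topic-word matrix beta_g and documents drawn via the LDA process
    # with a symmetric Dirichlet prior alpha_g * 1 over the k topics.
    rng = np.random.default_rng(seed)
    beta_g = rng.dirichlet(np.full(vocab, 0.1), size=k)   # k x vocab ground truth (assumed concentration)
    docs = np.zeros((num_docs, vocab), dtype=np.int64)
    for d in range(num_docs):
        theta = rng.dirichlet(np.full(k, alpha_g))         # per-document topic distribution
        word_dist = theta @ beta_g                         # mixture over the vocabulary
        word_dist = word_dist / word_dist.sum()            # guard against float drift
        docs[d] = rng.multinomial(doc_len, word_dist)      # bag-of-words counts
    return beta_g, docs

beta_g, docs = make_synthetic_corpus(num_docs=200)         # small run for illustration
print(beta_g.shape, docs.shape, docs.sum(axis=1)[:3])
```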
|
{ |
|
"text": "The 20NG and RCV1-v2 datasets are the same as (Miao et al., 2016) . The other three datasets are preprocessed through tokenizing, stemming, lemmatizing and the removal of stop words. We keep the most frequent 2000 words in DBPedia and Yelp. For AGNews, we keep the words which are contained in no more than half the documents and are contained in at least 15 documents. The statistics of the cleaned datasets are summarized in ", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 65, |
|
"text": "(Miao et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "On the real-world datasets, we use perplexity and normalized pointwise mutual information (NPMI) (Lau et al., 2014) as the evaluation metrics. On synthetic datasets, we propose topic words recovery accuracy (or \"recovery accuracy\" in short) to evaluate the model performance. Specifically, we extract the top-10 highestprobability word indexes from each row of \u03b2 g . The standard decoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "extracted word indexes constitute a 30 \u00d7 10 topicword matrix T g . Our goal is to use the topic models to recover this matrix. Denote by T L , a matrix extracted from the learned \u03b2 matrix of a model. Note that the rows of T L are arbitrarily ordered. To count how many words in the ith row t", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Methods", |
|
"sec_num": "5.2" |
|
}, |
|
|
{ |
|
"text": "We note that after a row of T g is compared with T L as the target of coverage, the found bestmatching row in T L is not removed. This approach is better than the alternative approach of greedily removing the best-matching row, since the latter would give an accuracy result that depends on the row ordering in T g . Additionally we note that the data generation process assures that the rows of T g each contain 10 distinct words. For this reason, keeping the found best-matching row in T L in each step entails no problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Methods", |
|
"sec_num": "5.2" |
|
}, |
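The recovery-accuracy computation described above is easy to state in code. A minimal NumPy sketch (function name and the toy example are ours), which keeps the best-matching learned row between comparisons exactly as described:

```python
import numpy as np

def recovery_accuracy(T_g, T_L):
    # T_g, T_L: arrays of shape (num_topics, 10) holding top-10 word indexes per topic.
    # For each ground-truth row, keep the maximum overlap against every learned row;
    # the best-matching learned row is NOT removed between comparisons.
    recovered = 0
    for row_g in T_g:
        recovered += max(len(set(row_g) & set(row_l)) for row_l in T_L)
    return recovered / T_g.size

T_g = np.arange(300).reshape(30, 10)   # toy ground truth: 30 topics, 10 distinct words each
T_L = np.vstack([T_g[::-1][:15], np.arange(300, 450).reshape(15, 10)])
print(recovery_accuracy(T_g, T_L))     # 0.5: half of the ground-truth rows are recovered
```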
|
{ |
|
"text": "In this section, we run RRT-VAE on 20NG and the synthetic datasets to explore its performance under different parameter settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Influence of Parameter Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Prior settings. Prior settings are claimed to have a significant influence on model performance (Wallach et al., 2009) . In this experiment, we run RRT-VAE on the 20NG dataset using four symmetric Dirichlet prior settings [0.02,0.2,1.0,2.0]. The number of topics is set to 50 and \u03bb is set to 0.01 in all experiments. We use \u2206 = 10 \u221210 as the rounding precision such that accurate Dirichlet samples can be drawn. As shown in Figure 1 (left) , when using a larger prior parameter (1 or larger), the training loss drops more rapidly and converges to a lower value. Table 2 reports the corresponding testing results. We found that when using a smaller prior setting, RRT-VAE tends to achieve a better topic coherence (NPMI) while sacrificing some performance on perplexity. One possible explanation of these phenomena is that a smaller prior setting (lower than 1) encourages the encoder network to sample a sparser topic distribution \u03b8. The sparsity of \u03b8 in turn makes it easier for the model to assign a very small probability on some existing words in a document and thus increases the training loss and perplexity. To verify this conjecture, we construct a simple method to measure sparsity: after the training, we randomly feed 1000 training samples into the encoder network and obtain 1000 topic distribution vectors {\u03b8 i } 1000 i=1 . For each \u03b8 i , we calculate the difference between its largest and smallest probability value and then average these differences over the 1000 samples. Clearly, a larger difference value indicates a sparser \u03b8, e.g. the maximum difference 1 is achieved by a one-hot vector. From the sparsity measurements in Table 2 , we see that a smaller prior setting causes the encoder to generate sparser topic distribution vectors, which in turn hinders the convergence of the training loss to a lower value and hence causes a higher perplexity. On the other hand, sparser topic distributions tend to improve NPMI, although this improvement is slight. \u03bb settings. The \"gradient control\" parameter \u03bb in RRT adjusts the strength of the gradient signal back-propagated to the encoder while also influencing the variance of the Monte Carlo gradient estimator. Figure 1 (right) and Table 3 report the influence of different \u03bb settings on the model performance, where the number of topics is set to 50 and the prior is set to 1. As shown, when \u03bb is set too small (e.g. \u03bb = 0.001), the training loss fails to converge to a lower value, resulting in a higher perplexity and worse NPMI. The best performance is achieved when \u03bb is set between around 0.01 and 0.005. Different \u03bb settings can bring similar training performances but different testing results. For example, when \u03bb is set to 0.1 and 0.01, the corresponding training performances are very similar (see Figure 1 (right) , blue and grey dash line), however, \u03bb = 0.01 achieves a better perplexity and NPMI result. In these experiments, \u03bb is set to 0.01, the number of topics is set to 50.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 118, |
|
"text": "(Wallach et al., 2009)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 439, |
|
"text": "Figure 1 (left)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1644, |
|
"end": 1651, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2181, |
|
"end": 2197, |
|
"text": "Figure 1 (right)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2202, |
|
"end": 2209, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 2779, |
|
"end": 2795, |
|
"text": "Figure 1 (right)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on 20NG", |
|
"sec_num": "5.3.1" |
|
}, |
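The sparsity measurement described above (average gap between the largest and smallest topic probability over sampled topic vectors) is a few lines of code. A minimal PyTorch sketch (function name and the toy comparison are ours):

```python
import torch

def sparsity_score(thetas):
    # thetas: (num_samples, k) topic distributions sampled from the trained encoder.
    # Average over samples of (largest probability - smallest probability);
    # 1 is attained only by one-hot vectors, 0 by exactly uniform ones.
    return (thetas.max(dim=-1).values - thetas.min(dim=-1).values).mean()

uniform = torch.full((1000, 50), 1.0 / 50)
peaked = torch.distributions.Dirichlet(torch.full((50,), 0.02)).sample((1000,))
print(sparsity_score(uniform).item(), sparsity_score(peaked).item())  # ~0 vs close to 1
```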
|
{ |
|
"text": "Influence of the rounding precision \u2206. A main concern of RRT is that the induced discontinuities may cause training to be unstable. As proved in Section 3, this discontinuity actually causes little impact on the stability of training. We substantiate this conclusion in Figure 2 (a) by plotting the training loss curves of RRT-VAE under different \u2206 settings. As shown, all the training losses converge stably when using different \u2206. This demonstrates that the precision of the rounding operation has little impact on the training stability. The influences of \u2206 on perplexity and NPMI are also modest. As shown in Figure 2 (b) and (c), the resulting perplexities and NPMIs are in general insensitive to the \u2206 settings. From Figure 2 (b) and (d), it can also be observed that the perplexity of RRT-VAE has correlation with the sparsity. When \u2206 changes from 1 to 10 \u221210 , the sparsity value of\u03b1 = 0.02 (green line in Figure 2 (d) ) jumps from 0.059 to around 0.55. 5 The corresponding perplexity value (green line in Figure 2 (b)) also increases from 1078 to around 1400. In contrast, the sparsity levels of\u03b1 = 1.0 and \u03b1 = 2.0 remain unchanged. Their corresponding perplexities also stay at the same levels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 962, |
|
"end": 963, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 278, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 621, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 731, |
|
"text": "From Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 914, |
|
"end": 926, |
|
"text": "Figure 2 (d)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1014, |
|
"end": 1022, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on 20NG", |
|
"sec_num": "5.3.1" |
|
}, |
|
{ |
|
"text": "Our experiments on the synthetic datasets again demonstrate that the rounding precision has little impact on the training stability. Figure 3 (left) exhibits how different \u2206 settings influence the training performance of RRT-VAE when \u03b1 g = 0.01 (the results of \u03b1 g = 0.05 and 0.1 are shown in Appendix A.2). As shown, all the training losses decrease stably, although a higher \u2206 setting hinders the loss converging to a lower value. Figure 3 (right) reports how different \u2206 settings influence the recovery accuracy of RRT-VAE on three synthetic datasets. It can be seen that a smaller \u2206 achieves a better performance. Specifically, when \u2206 = 1, the training loss remains at a high value and the corresponding recovery accuracy is lower than 60%, indicating that RRT-VAE fails to fit the true data distribution. In contrast, when \u2206 = 10 \u221210 , RRT-VAE fits the data well: the training loss drops rapidly and converges to a much lower value; the resulting recovery accuracy reaches up to 90%. Recall that on 20NG, both the training and testing performances are insensitive to the rounding precision. In contrast, on synthetic datasets, the rounding precision has a significant influence. This phenomenon is reasonable, since the synthetic data strictly satisfies the LDA generative process. A higher \u2206 setting causes the rounded distribution deviate the Dirichlet posterior, thereby interfering with the fitting of the data. On the other hand, the underlying distribution of the real-world data does not strictly conform to the LDA assumption. This deviation, therefore, has little impact on fitting the data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 141, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 442, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on Synthetic datasets", |
|
"sec_num": "5.3.2" |
|
}, |
|
{ |
|
"text": "In this section, we compare RRT-VAE with other existing topic models on both real-world datasets and synthetic datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Models", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "On real-world datasets, we do not compare Online LDA, since the training of Online LDA on large datasets is extremely time consuming and Online LDA fails to obtain any good results after being trained for a long time (results of Online LDA on 20NG are shown in Appendix B.2). For ProdLDA, DirVAE and RRT-VAE, we tune the prior parameter from [0.02,0.2,1.0]. The best \u03bb settings of RRT-VAE for each dataset are shown in Table 4 . All the compared models adopt the same prod decoder of (5) on the real-world datasets. margherita grimaldi pizzeria pepperoni sbarro brooklyn bianco mozza spinato concours udon ichiza monta tokyo chaya agedashi saigon chinatown gyoza yaki croissant decaf oatmeal scone coffe granola almond pastri latt muffin hue bo pho vietnames viet banh lemongrass vietnam mi basil sportsbook mandalay ronin kiki miyagi puck bachi shogun fatburg oxtail heighten punctuat suppl amidst juxtapos conscious onward revel evok gleam ewwww saliva kneel cock toothless broom discust demerit surveil sill wan non asian pan asian pak taipei totti hotpot hai sift empty hand marshall stuffer overstock spree reorgan sweatshirt store preach outbreak heartfelt pois raymond uplift caregiv worship charismat deathli buger haystack stripburg in and out quadrupl deli fukuburg fries food poison ambienc atmospher awsom bedienungen cafeteria defiantli chipotl slowest oldtown boozer after work carly grapevin fiver meet up hang tombston pokey pizza but peroni numero pizzaria pizza n nth insipid banal nil nla disposit st laurent hyper extraordinair procur store sale housewar homegood inventori brows shelv thrift shopper stock sashimi eel tempura nigiri yellowtail ponzu sushi edamam tuna wasabi dr doctor exam physician nurs physician obgyn urgent clinic medic airport plane flight baggag mccarran tsa passeng megabu shuttl airlin workout instructor zumba yoga class bike gym crossfit fairway paintbal The experimental results are shown in Table 5 and 6. It can be seen that on the small and medium size datasets (20NG and AGNews), the performance of DirVAE levels with RRT-VAE, while on the large datasets (RCV1-v2, DBpedia and Yelp), the NPMI of RRT-VAE is significantly better than all the other compared models. Although the perplexity of NVDM is better than RRT-VAE, this gap is small. On the other hand, on NPMI, RRT-VAE outperforms NVDM by a very large margin. In fact, it has been demonstrated that perplexity is not necessarily a good metric for evaluating the quality of learned topics (Newman et al., 2010) . Its correlation to the quality of the learned topics is questionable 6 (Chang et al., 2009) . With these considerations, we argue that RRT-VAE is overall superior to other compared models. Table 7 exhibits the extracted topic words of different models, where each line of the words corresponds to a certain topic. We see that the words extracted by RRT-VAE (the bottom cell of Table 7) are much more interpretable, from which it can 6 In general, perplexity measures the goodness-of-fit of data to a learned model under the maximum likelihood principle. This makes it a valid metric for evaluation when the learning objective (as in the considered models) aims at maximizing the data likelihood. On the other hand, we note that traditionally in all VAE-LDA models (e.g., those compared in this paper) and also in this paper, perplexity is in fact approximately computed using the evidence lower bound (ELBO) of the data likelihood, since exact computation of the data likelihood is usually intractable. 
But the perplexity computed this way aggregates the overall effects of both the learned decoder (i.e., the \u03b2 matrix) and the learned encoder. Therefore it does not provide a direct evaluation of the learned word distributions in the \u03b2 matrix. This problem is overcomed by the additional NPMI measure, which is computed directly from the \u03b2 matrix and serves as a more indicative quality measurement of the learned topics. be easily inferred that the corresponding topics are \"trade\", \"Japanese food\", \"medical\" and \"fitness\". But it is not the case for the other models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2497, |
|
"end": 2518, |
|
"text": "(Newman et al., 2010)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 2592, |
|
"end": 2612, |
|
"text": "(Chang et al., 2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2955, |
|
"end": 2956, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 426, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1941, |
|
"end": 1948, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 2710, |
|
"end": 2717, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 2898, |
|
"end": 2907, |
|
"text": "Table 7)", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Real-world datasets", |
|
"sec_num": null |
|
}, |
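Since NPMI is used as the main coherence measure above, a minimal NumPy sketch of the pairwise NPMI score for one topic's top words is given below. It only illustrates the formula from (Lau et al., 2014) using document-level co-occurrence; the exact evaluation protocol (reference corpus, number of top words, window choice) is not specified here and is our assumption.

```python
import numpy as np

def topic_npmi(top_words, docs_bow, eps=1e-12):
    # top_words: word indexes for one topic; docs_bow: (num_docs, vocab) count matrix.
    # NPMI(w_i, w_j) = log( p(w_i, w_j) / (p(w_i) p(w_j)) ) / -log p(w_i, w_j),
    # with probabilities estimated from document-level co-occurrence.
    present = docs_bow > 0
    scores = []
    for a in range(len(top_words)):
        for b in range(a + 1, len(top_words)):
            wa, wb = top_words[a], top_words[b]
            p_a = present[:, wa].mean()
            p_b = present[:, wb].mean()
            p_ab = (present[:, wa] & present[:, wb]).mean() + eps
            scores.append(np.log(p_ab / (p_a * p_b + eps)) / -np.log(p_ab))
    return float(np.mean(scores))

docs = (np.random.rand(1000, 200) > 0.95).astype(int)   # toy corpus for illustration
print(topic_npmi([1, 2, 3, 4, 5], docs))
```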
|
{ |
|
"text": "We compare RRT-VAE with Online LDA, ProdLDA and DirVAE on three synthetic datasets which are generated by different Dirichlet parameters. The compared three neural topic models adopt the same standard decoder of (4). Since NVDM is a pure Gaussian VAE model without any approximation of Dirichlet distributions, it is not compared in this experiment. Table 8 reports the recovery accuracy of the compared models. The experimental results strongly demonstrate the ability of RRT-VAE as an inference method to learn LDA. Specifically, RRT-VAE levels with Online LDA on recovery accuracy, while it enjoys a much higher computational efficiency. Among three neural topic models, RRT-VAE clearly outperforms the others. Appendix A.3 shows an example of the ground truth matrix T g and the matrix recovered by RRT-VAE. Table 8 : Recovery accuracy of four topic models on synthetic datasets generated by three different \u03b1 g settings. For RRT-VAE, \u03bb is set to 1; \u2206 is set to 10 \u221210 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 357, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 819, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic datasets", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, rounded reparameterization trick, or RRT, is shown as an effective and efficient reparameterization method for Dirichlet distributions in the context of learning VAE based LDA models. In fact, the applicability of RRT can be generalized beyond Dirichlet distributions. This is because any distribution can be reparameterized to an \"RRT form\" as long as a sampling algorithm exists for that distribution. Thus it will be interesting to investigate the performance of RRT in other applications of VAE beyond topic modelling. Successes in these investigations will certainly extend the applicability of VAE to much broader application domains and model families.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A.1 Topic recovery accuracy using prod decoder Table 9 : Topic words recovery accuracy of three neural topic models on synthetic datasets generated with three different \u03b1 g settings. The models adopt the same prod decoder structure. For RRT-VAE, \u03bb is set to 1; \u2206 is set to 10 \u221210 . Table 9 reports the topic recovery accuracy of three neural topic models using the prod decoder. Compared to Table 8 , it can be seen that the standard decoder significantly outperforms the prod decoder on the synthetic datasets. Table 10 exhibits an example of the ground truth topic word matrix T g used in our experiments and Table 10 : Left: the ground truth topic word matrix T g ; Right: a matrix T L learned by RRT-VAE. Note that the rows of T L are arbitrarily ordered. For example, the first and second rows of T g individually correspond to the 11th and 14th rows of T L (as shown in bold). As shown in Table 11 , when using the standard decoder on the 20NG dataset, RRT-VAE appears to extract many repetitive topic words. B.3 Topic words extracted by RRT-VAE Table 13 exhibits the topic words extracted by RRT-VAE from four real-world datasets (20NG, AG-News, RCV1-v2 and DBpedia), where each line of the words corresponds to a certain topic.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 54, |
|
"text": "Table 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 289, |
|
"text": "Table 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 398, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 520, |
|
"text": "Table 10", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 619, |
|
"text": "Table 10", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 895, |
|
"end": 903, |
|
"text": "Table 11", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1060, |
|
"text": "Table 13", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Additional Results on Synthetic Datasets", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "health medical patient disease medicine estimate hospital care service coverage violent gun crime handgun usa criminal uk homicide defend firearm constitution senate amendment representative states president extend congress militia bear homosexual male sexual man statistics percent rsa number gay behavior fuel moon cool lunar air launch heat stage orbit cold guilti conspiraci ghraib martha milosev enron prison yugoslav torture sentence ansari spaceshipon genesi space hubbl parachut spacecraft nasa station astronaut docomo nokia vodafon phone motorola blackberri ip mobil treo mmo kill explod injur dead quak typhoon peopl jakarta bomb landslide mice skeleton supercompute gene genetic stem clone ancestor scientist speci thriv lifestyl shop museum flock fame cultur tast dream ancient desktop access network internet digit modem intranet download voice compute durum flood moisture disaster wheat grain hrw canol sorghum crop detain troop gunfire violent policeman military siege dozen terror embass attorney counsel felon lawsuit jury testif improp hear conspir guilt paperback reprint book republish young adult isbn author locu scholast desktop server intel web bas software device microsoft applic uav clarinet bassist guitarist drummer banjo violin guitar drum saxophon keyboardist airway airport iata airlin icao brokerag telecommun exchang asset financi ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.3 Recovered topic words", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Code will be available at https://github.com/ rzTian/RRT-VAE/tree/main", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For 20NG and RCV1-v2, we use the datasets provided by https://github.com/ysmiao/nvdm 4 http://groups.di.unipi.it/ gulli/AG corpus of news articles. html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the sparsity we define is computed from the randomly sampled \u03b8, it is inherently stochastic due to randomness in \u03b8. Thus a small fluctuation of the computed sparsity value needs not to indicate a true difference of sparsity levels. For example, on the green line ofFigure 2 (d), the sparsity values of \u2206 = 0.01 and \u2206 = 10 \u221210 are different, but the difference is not large enough to suggest that the two models have different sparsity levels; such a difference is primarily due to stochastic irregularity in our sparsity computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
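As a concrete illustration of this footnote (our own sketch, not the paper's code: the definition of sparsity as the fraction of near-zero entries, the threshold eps, and the Dirichlet stand-in for the sampled θ are all assumptions), re-estimating the same quantity from fresh samples of θ already produces small run-to-run fluctuations:

```python
import numpy as np

def sparsity(theta, eps=1e-2):
    """Fraction of near-zero entries in sampled topic proportions theta."""
    return float(np.mean(theta < eps))

rng = np.random.default_rng(0)
alpha = np.full(50, 0.02)                     # a sparse 50-topic Dirichlet stand-in
for run in range(3):
    theta = rng.dirichlet(alpha, size=1000)   # fresh samples of theta each run
    print(f"run {run}: sparsity = {sparsity(theta):.4f}")
# The printed values differ slightly across runs even though the underlying
# distribution is identical, so a small difference in reported sparsity need
# not indicate a real difference in sparsity levels.
```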
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "On-line lda: Adaptive topic models for mining text streams with applications to topic detection and tracking", |
|
"authors": [ |
|
{ |
|
"first": "Loulwah", |
|
"middle": [], |
|
"last": "Alsumait", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Barbar\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlotta", |
|
"middle": [], |
|
"last": "Domeniconi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "eighth IEEE international conference on data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Loulwah AlSumait, Daniel Barbar\u00e1, and Carlotta Domeniconi. 2008. On-line lda: Adaptive topic models for mining text streams with applications to topic detection and tracking. In 2008 eighth IEEE in- ternational conference on data mining, pages 3-12. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Dynamic topic models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John D", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 23rd international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd interna- tional conference on Machine learning, pages 113- 120.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reading tea leaves: How humans interpret topic models", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Gerrish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "288--296", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-Graber, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in neural information processing systems, pages 288-296.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Journal of the American society for information science", |
|
"authors": [ |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Deerwester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Susan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Furnas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Landauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harshman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "391--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott Deerwester, Susan T Dumais, George W Fur- nas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American society for information science, 41(6):391-407.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Implicit reparameterization gradients", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Figurnov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shakir", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "441--452", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Figurnov, Shakir Mohamed, and Andriy Mnih. 2018. Implicit reparameterization gradients. In Ad- vances in Neural Information Processing Systems, pages 441-452.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Likelihood ratio gradient estimation for stochastic systems", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Peter W Glynn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Communications of the ACM", |
|
"volume": "33", |
|
"issue": "10", |
|
"pages": "75--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter W Glynn. 1990. Likelihood ratio gradient estima- tion for stochastic systems. Communications of the ACM, 33(10):75-84.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Variance reduction techniques for gradient estimates in reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Greensmith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Bartlett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baxter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1471--1530", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evan Greensmith, Peter L Bartlett, and Jonathan Bax- ter. 2004. Variance reduction techniques for gradi- ent estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Finding scientific topics", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the National academy of Sciences", |
|
"volume": "101", |
|
"issue": "1", |
|
"pages": "5228--5235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas L Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National academy of Sciences, 101(suppl 1):5228-5235.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Online learning for latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Francis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "856--864", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Hoffman, Francis R Bach, and David M Blei. 2010. Online learning for latent dirichlet allocation. In advances in neural information processing sys- tems, pages 856-864.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Probabilistic latent semantic indexing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual inter- national ACM SIGIR conference on Research and development in information retrieval, pages 50-57.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sungrae Park, and Il-Chul Moon", |
|
"authors": [ |
|
{ |
|
"first": "Weonyoung", |
|
"middle": [], |
|
"last": "Joo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonsung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungrae", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Il-Chul", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Dirichlet variational autoencoder", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.02739" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weonyoung Joo, Wonsung Lee, Sungrae Park, and Il- Chul Moon. 2019. Dirichlet variational autoencoder. arXiv preprint arXiv:1901.02739.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Autoencoding variational bayes", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1312.6114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Max Welling. 2013. Auto- encoding variational bayes. arXiv preprint arXiv:1312.6114.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Stochastic gradient variational bayes for gamma approximating distributions", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Knowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1509.01631" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A Knowles. 2015. Stochastic gradient varia- tional bayes for gamma approximating distributions. arXiv preprint arXiv:1509.01631.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Jey Han Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "530--539", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 530-539.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Isele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Jakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Jentzsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Kontokostas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Mendes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Hellmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Morsey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Van Kleef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00f6ren", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Semantic Web", |
|
"volume": "6", |
|
"issue": "2", |
|
"pages": "167--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S\u00f6ren Auer, et al. 2015. Dbpedia-a large-scale, mul- tilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving lda topic models for microblogs via tweet pooling and automatic labeling", |
|
"authors": [ |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Mehrotra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Sanner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lexing", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "889--892", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rishabh Mehrotra, Scott Sanner, Wray Buntine, and Lexing Xie. 2013. Improving lda topic models for microblogs via tweet pooling and automatic label- ing. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, pages 889-892.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Neural variational inference for text processing", |
|
"authors": [ |
|
{ |
|
"first": "Yishu", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International conference on machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1727--1736", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Interna- tional conference on machine learning, pages 1727- 1736.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Reparameterization gradients through acceptance-rejection sampling algorithms", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Christian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Naesseth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Francisco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ruiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Scott W Linderman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1610.05683" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian A Naesseth, Francisco JR Ruiz, Scott W Lin- derman, and David M Blei. 2016. Reparameteri- zation gradients through acceptance-rejection sam- pling algorithms. arXiv preprint arXiv:1610.05683.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatic evaluation of topic coherence", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jey", |
|
"middle": [ |
|
"Han" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Grieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Tim- othy Baldwin. 2010. Automatic evaluation of topic coherence. In Human language technologies: The 2010 annual conference of the North American chap- ter of the association for computational linguistics, pages 100-108. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Scikit-learn: Machine learning in python", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of machine learning research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The generalized reparameterization gradient", |
|
"authors": [ |
|
{ |
|
"first": "Michalis", |
|
"middle": [], |
|
"last": "Francisco R Ruiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Titsias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Aueb", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "460--468", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francisco R Ruiz, Michalis Titsias RC AUEB, and David Blei. 2016. The generalized reparameteriza- tion gradient. In Advances in neural information processing systems, pages 460-468.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Autoencoding variational inference for topic models", |
|
"authors": [ |
|
{ |
|
"first": "Akash", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.01488" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akash Srivastava and Charles Sutton. 2017. Autoen- coding variational inference for topic models. arXiv preprint arXiv:1703.01488.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Rethinking lda: Why priors matter", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mc-Callum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1973--1981", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna M Wallach, David M Mimno, and Andrew Mc- Callum. 2009. Rethinking lda: Why priors matter. In Advances in neural information processing sys- tems, pages 1973-1981.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Lda-based document models for ad-hoc retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Wei and W Bruce Croft. 2006. Lda-based doc- ument models for ad-hoc retrieval. In Proceedings of the 29th annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 178-185.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ronald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Machine learning", |
|
"volume": "8", |
|
"issue": "3-4", |
|
"pages": "229--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229-256.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Character-level convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "649--657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Training performance of RRT-VAE with different prior (left) and \u03bb settings (right).", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "(a) Training performance of RRT-VAE with different \u2206 settings; (b)-(d) perplexity, NPMI and sparsity of RRT-VAE with different \u2206 and prior\u03b1 settings.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Training performances (left) and recovery accuracy (right) of RRT-VAE on a synthetic dataset (\u03b1 g = 0.01) with different \u2206 settings.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "plots the training loss curves of RRT-VAE with different \u2206 settings on two synthetic datasets (\u03b1 = 0.05 and \u03b1 = 0.1). The curves perform similarly toFigure 3 (left).", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "Training performances of RRT-VAE with different \u2206 settings. Left: \u03b1 g = 0.05; right: \u03b1 g = 0.1.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">20NG AGNews RCV1-v2 DBpedia Yelp</td></tr><tr><td colspan=\"3\">#Train 11258 120000 794414 560000 560000</td></tr><tr><td>#Test 7487 7600</td><td>10000</td><td>70000 38000</td></tr><tr><td>#Vocab 1995 10630</td><td>10000</td><td>20000 20000</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Evaluation results on RRT-VAE with different prior settings. Perplexity: lower is better; NPMI: higher is better; Sparsity: higher means sparser.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">\u03bb Settings 0.1 0.01 0.005 0.001</td></tr><tr><td colspan=\"2\">Perplexity 1004 951 978 1127</td></tr><tr><td>NPMI</td><td>0.221 0.243 0.271 0.160</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Evaluation results of RRT-VAE with different \u03bb settings.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Optimal \u03bb settings of RRT-VAE for different datasets.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td>NVDM</td><td>ProdLDA DirVAE RRT-VAE</td></tr><tr><td>20NG</td><td colspan=\"2\">773/0.152 987/0.262 970/0.277 978/0.271</td></tr><tr><td colspan=\"3\">AGNews 1067/0.086 1457/0.196 1573/0.287 1318/0.287</td></tr><tr><td colspan=\"3\">RCV1-v2 511/0.121 623/0.164 746/0.137 623/0.262</td></tr><tr><td colspan=\"3\">DBPedia 617/0.093 1065/0.101 1018/0.102 851/0.227</td></tr><tr><td>Yelp</td><td colspan=\"2\">1003/0.120 1244/0.064 1353/0.068 1251/0.266</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Perplexity/NPMI of the compared topic models on five datasets. The number of topic is set to 50.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td>NVDM</td><td>ProdLDA DirVAE RRT-VAE</td></tr><tr><td>20NG</td><td colspan=\"2\">1167/0.140 1050/0.172 973/0.215 997/0.214</td></tr><tr><td colspan=\"3\">AGNews 1160/0.056 2434/0.024 1523/0.156 1914/0.226</td></tr><tr><td colspan=\"3\">RCV1-v2 482/0.107 604/0.085 706/0.045 669/0.254</td></tr><tr><td colspan=\"3\">DBPedia 597/0.055 997/0.113 1028/0.041 884/0.161</td></tr><tr><td>Yelp</td><td colspan=\"2\">996/0.069 1272/0.072 1259/0.044 1325/0.174</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "Topic words extracted from the Yelp dataset.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>From top to bottom, each cell is extracted by NVDM,</td></tr><tr><td>ProdLDA, DirVAE and RRT-VAE. More examples are</td></tr><tr><td>exhibited in Appendix B.3.</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"text": "B Additional Results on Real-world Datasets B.1 Repetitive words write article one get know like think say go use write article get one know like use think say go get go like write make people article insurance tax one write article one get use like think know go say know thanks please anyone write get email article post like", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"text": "The standard decoder appears to extract many repetitive words on 20NG.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF13": { |
|
"html": null, |
|
"text": "B.2 Performance of Online LDA on 20NG", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Perplexity NPMI</td></tr><tr><td>50 topics</td><td>1183</td><td>0.181</td></tr><tr><td>200 topics</td><td>2728</td><td>0.162</td></tr></table>" |
|
}, |
|
"TABREF14": { |
|
"html": null, |
|
"text": "The experimental results of Online LDA on the 20NG dataset.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF15": { |
|
"html": null, |
|
"text": "Topic words extracted by RRT-VAE from four different datasets. From top to bottom, each cell is extracted from 20NG, AGNews, RCV1-v2 and DBpedia.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |