{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:45.275768Z"
},
"title": "A Joint Learning Approach for Semi-supervised Neural Topic Modeling",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Chiu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Rajat",
"middle": [],
"last": "Mittal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Neehal",
"middle": [],
"last": "Tumma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Sharma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Topic models are one of the most widely used and studied text modeling techniques, both because of their intuitive generative process and interpretable results (Blei, 2012) . Though topic models are mostly used on textual data (Rosen-Zvi et al., 2012; Yan et al., 2013) , use cases have since expanded to areas such as genomics modeling (Liu et al., 2016) and molecular modeling (Schneider et al., 2017) .",
"cite_spans": [
{
"start": 160,
"end": 172,
"text": "(Blei, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 227,
"end": 251,
"text": "(Rosen-Zvi et al., 2012;",
"ref_id": "BIBREF28"
},
{
"start": 252,
"end": 269,
"text": "Yan et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 337,
"end": 355,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 379,
"end": 403,
"text": "(Schneider et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, neural topic models, which leverage deep generative models have been used successfully for learning these probabilistic models. A lot of this success is due to the development of variational autoencoders which allow for inference of intractable distributions over latent variables through a back-propagation over an inference network. Furthermore, recent research shows promising results for Neural Topic Models compared to traditional * Equal contribution topic models due to the added expressivity from neural representations; specifically, we see significant improvements in low data regimes (Srivastava and Sutton, 2017; Iwata, 2021) .",
"cite_spans": [
{
"start": 605,
"end": 634,
"text": "(Srivastava and Sutton, 2017;",
"ref_id": "BIBREF31"
},
{
"start": 635,
"end": 647,
"text": "Iwata, 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Joint learning of topics and other tasks have been researched in the past, specifically through supervised topic models (Blei and McAuliffe, 2010; Huh and Fienberg, 2012; Cao et al., 2015; Wang and Yang, 2020) . These works are centered around the idea of a prediction task using a topic model as a dimensionality reduction tool. Fundamentally, they follow a downstream task setting (Figure 1) , where the label is assumed to be generated from the latent variable (topics). On the other hand, an upstream setting would be when the input (document) is generated from a combination of the latent variable (topics) and label, which has the benefit of better directly modeling how the label affects the document, resulting in topic with additional information being injected from the label information. Upstream variants of supervised topic models are much less common, with, to the extent of our knowledge, no neural architectures to this date. (Ramage et al., 2009; Lacoste-Julien et al., 2008) .",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Blei and McAuliffe, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 147,
"end": 170,
"text": "Huh and Fienberg, 2012;",
"ref_id": "BIBREF12"
},
{
"start": 171,
"end": 188,
"text": "Cao et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 189,
"end": 209,
"text": "Wang and Yang, 2020)",
"ref_id": "BIBREF32"
},
{
"start": 942,
"end": 963,
"text": "(Ramage et al., 2009;",
"ref_id": "BIBREF25"
},
{
"start": 964,
"end": 992,
"text": "Lacoste-Julien et al., 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 383,
"end": 393,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model, the Label-Indexed Neural Topic Model (LI-NTM) stands uniquely with respect to all existing topic models. We combine the benefits of an upstream generative processes (Figure 1 ), label-indexed topics, and a topic model capable of semi-supervised learning and neural topic modeling to jointly learn a topic model and label classifier. Our main contributions are:",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 185,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. The introduction of the first upstream semisupervised neural topic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A label-indexed topic model that allows more cohesive and diverse topics by allowing the label of a document to supervise the learned topics in a semi-supervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "3. A joint training framework that allows for users to tune the trade-off between document classifier and topic quality which results in a classifier that outperforms same classifier trained in an isolated setting for certain hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Most past work in neural topic models focused on designing inference networks with better model specification in the unsupervised setting. One line of recent research attempts to improve topic model performance by modifying the inference network through changes to the topic priors or regularization over the latent space (Miao et al., 2016; Srivastava and Sutton, 2017; Nan et al., 2019) . Another line of research looks towards incorporating the expressivity of word embeddings to topic models (Dieng et al., 2019a,b) . In contrast to existing work on neural topic models, our approach does not mainly focus on model specification; rather, we create a broader architecture into which neural topic models of all specifications can be trained in an upstream, semisupervised setting. We believe that our architecture will enable existing neural topic models to be used in a wider range of real-word scenarios where we leverage labeled data alongside unlabeled data and use the knowledge present in document labels to further supervise topic models. Moreover, by directly tying our topic distributions to the labels through label-indexing, we create topics that are specific to labels, making these topics more interpretable as users are directly able to glean what types of documents each of the topics are summarizing.",
"cite_spans": [
{
"start": 322,
"end": 341,
"text": "(Miao et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 342,
"end": 370,
"text": "Srivastava and Sutton, 2017;",
"ref_id": "BIBREF31"
},
{
"start": 371,
"end": 388,
"text": "Nan et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 496,
"end": 519,
"text": "(Dieng et al., 2019a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Topic Models",
"sec_num": "2.1"
},
{
"text": "Most supervised topic models follow the downstream supervised framework introduced in s-LDA (Blei and McAuliffe, 2010) . This framework assumes a two-stage setting in which a topic model is trained and then a predictive model for the document labels is trained independently of the topic model. Neural topic models following this framework have also been developed, with the predictive model being a discriminative layer attached to the learned topics, essentially treating topic modeling as a dimensionality reduction tool (Wang and Yang, 2020; Cao et al., 2015; Huh and Fienberg, 2012) .",
"cite_spans": [
{
"start": 92,
"end": 118,
"text": "(Blei and McAuliffe, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 524,
"end": 545,
"text": "(Wang and Yang, 2020;",
"ref_id": "BIBREF32"
},
{
"start": 546,
"end": 563,
"text": "Cao et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 564,
"end": 587,
"text": "Huh and Fienberg, 2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Downstream Supervised Topic Models",
"sec_num": "2.2"
},
{
"text": "In contrast to existing work, LI-NTM is an upstream generative model (Figure 2 lowing a prediction-constrained framework. The upstream setting allows us to implicitly train our classifier and topic model in a one-stage setting that is end-to-end. This has the benefit of allowing us to tune the trade-off between our classifier and topic model performance in a predictionconstrained framework, which has been shown to achieve better empirical results when latent variable models are used as a dimensionality reduction tool (Hughes et al., 2018; Sharma et al., 2021) . Furthermore, the upstream setting allows us to introduce the document label classifier as a latent variable, enabling our model to work in semisupervised settings.",
"cite_spans": [
{
"start": 523,
"end": 544,
"text": "(Hughes et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 545,
"end": 565,
"text": "Sharma et al., 2021)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 69,
"end": 78,
"text": "(Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Downstream Supervised Topic Models",
"sec_num": "2.2"
},
{
"text": "LI-NTM extends upon two core ideas: Latent Dirichlet Allocation (LDA) and deep generative models. For the rest of the paper, we assume a setting where we have a document corpus of D documents, a vocabulary with V unique words, and each document having a label from the L possible labels. Furthermore let us represent w dn as the n-th word in the d-th document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "LDA is a probabilistic generative model for topic modeling (Blei et al., 2003; Blei and McAuliffe, 2010) . Through the process of estimation and inference, LDA learns K topics \u03b2 1:K . The generative process of LDA posits that each document is a mixture of topics with the topics being global to the entire corpus. For each document, the generative process is listed below: Figure 2 : Generative Process for LI-NTM: The label y indexes into our label-topic-word matrix \u03b2, which is \"upstream\" of the observed words in the document w.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Blei et al., 2003;",
"ref_id": null
},
{
"start": 79,
"end": 104,
"text": "Blei and McAuliffe, 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 373,
"end": 381,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},
{
"text": "1. Draw topic proportions \u03b8 d \u223c Dirichlet(\u03b1 \u03b8 ) \u03b1 \u03b8 z \u03b2 y w N M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},
{
"text": "2. For each word w in document:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},
{
"text": "(a) Draw topic assignment z dn \u223c Cat(\u03b8 d ) (b) Draw word w dn \u223c Cat(\u03b2z dn ) 3. Draw responses y|z 1:N , \u03b7, \u03c3 2 \u223c N (\u03b7 Tz , \u03c3 2 ) (if supervised) wherez := 1 N N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},
{
"text": "z n and the parameters \u03b7, \u03c3 2 are estimated during inference. \u03b1 \u03b8 is a hyperparameter that serves as a prior for topic mixture proportions. In addition we also have hyperparameter \u03b1 \u03b2 that we use to place a dirichlet prior on our topics, \u03b2 k \u223c Dirichlet(\u03b1 \u03b2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},
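{
"text": "To make the generative process above concrete, the following NumPy sketch samples a single document under LDA together with the supervised response of step 3; the specific values of V, K, N and the hyperparameters are illustrative assumptions rather than values from the paper.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nV, K, N = 20, 3, 50                      # vocabulary size, number of topics, words per document (illustrative)\nalpha_theta, alpha_beta = 0.1, 0.01      # Dirichlet hyperparameters (illustrative)\n\nbeta = rng.dirichlet(alpha_beta * np.ones(V), size=K)     # K topic-word distributions beta_1:K\ntheta_d = rng.dirichlet(alpha_theta * np.ones(K))          # topic proportions for one document\nz_d = rng.choice(K, size=N, p=theta_d)                     # per-word topic assignments\nw_d = np.array([rng.choice(V, p=beta[z]) for z in z_d])    # observed words\n\neta, sigma2 = rng.normal(size=K), 0.5                      # response parameters (illustrative)\nz_bar = np.bincount(z_d, minlength=K) / N                  # empirical topic frequencies\ny_d = rng.normal(eta @ z_bar, np.sqrt(sigma2))             # supervised response, as in step 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "3.1"
},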
{
"text": "Deep Generative Models serve as the bridge between probabilistic models and neural networks. Specifically, deep generative models treat the parameters of distributions within probabilistic models as outputs of neural networks. Deep generative models fundamentally work because of the re-parameterization trick that allows for backpropogation through Monte-Carlo samples of distributions from the location-scale family. Specifically, for any distribution g(\u2022) from the location-scale family, we have that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "z \u223c g(\u00b5, \u03c3 2 ) \u21d0\u21d2 z = \u00b5 + \u03c3 \u2022 \u03f5, \u03f5 \u223c g(0, 1) thus allowing differentiation with respect to \u00b5, \u03c3 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
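{
"text": "As a minimal PyTorch sketch of the re-parameterization trick just described, for the Gaussian case (the function and variable names are ours, not the paper's):\n\nimport torch\n\ndef reparameterize(mu, log_sigma):\n    # z = mu + sigma * eps with eps ~ N(0, I), so gradients flow to mu and sigma\n    eps = torch.randn_like(mu)\n    return mu + torch.exp(log_sigma) * eps\n\nmu = torch.zeros(4, requires_grad=True)\nlog_sigma = torch.zeros(4, requires_grad=True)\nz = reparameterize(mu, log_sigma)\nz.sum().backward()    # both mu.grad and log_sigma.grad are now populated",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},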
{
"text": "The Variational Auto-encoder is the simplest deep generative model (Kingma and Welling, 2014) and it's generative process is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "p \u03b8 (x, z) = p \u03b8 (x|z)p(z) p \u03b8 (x|z) \u223c N (\u00b5 \u03b8 (z), \u03a3 \u03b8 (z)) p(z) \u223c N (0, I)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "where \u00b5 \u03b8 (z), \u03a3 \u03b8 (z) are both parameterized by neural networks with variational parameters \u03b8. Inference on a variational autoencoder is done through",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "x x \u03b2 \u03c0 \u00b5 \u03c3 Encoder Classifier Decoder \u03b8 q \u03bd (y|x) q \u03d5 (\u03b4|x) softmax((\u03b2 T \u03c0)\u03b8 d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "Figure 3: Architecture for LI-NTM in the un-labeled setting. y is used instead of obtaining a probability distribution \u03c0 from the classifier in the labeled setting. q(\u2022|x) are distributions parameterized by neural networks. Note that we can optimize the classifier, encoder, and decoder in one backwards pass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "approximating the true posterior p(z|x) which is often intractable with an approximation q \u03d5 (z|x) that is parametrized by a neural network. The M2 model is the semi-supervised extension of the variational auto-encoder where the input is modeled as being generated by both a continuous latent variable z and the class label y as a latent variable . It follows the generative process below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "p \u03b8 (x, z, y) = p \u03b8 (x|y, z)p(y)p(z) p \u03b8 (x|y, z) \u223c N (\u00b5 \u03b8 (y, z), \u03a3 \u03b8 (y, z)) p(y) \u223c Cat(y|\u03c0) p(z) \u223c N (0, I)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "where \u03c0 is parameterizing the distribution on y and \u00b5 \u03b8 (y, z), \u03a3 \u03b8 (y, z) are both parameterized by neural networks. We then approximate the true posterior p(y, z|x) using by saying",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "p(y, z|x) \u2248 q \u03d5 (z|y, x)q \u03d5 (y|x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
{
"text": "where q \u03d5 (y|x) is a classifier that's used in the unlabeled case and q \u03d5 (z|y, x) is a neural network that takes in the true labels if available and the outputted labels from q \u03d5 (y|x) if unavailable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},
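{
"text": "A sketch of how the M2-style approximate posterior described above can be wired up in PyTorch; the layer sizes and module names are illustrative assumptions, not the paper's architecture.\n\nimport torch\nimport torch.nn as nn\n\nclass M2Posterior(nn.Module):\n    def __init__(self, x_dim, y_dim, z_dim, hidden=64):\n        super().__init__()\n        # q(y|x): classifier used when the label is unavailable\n        self.classifier = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))\n        # q(z|y,x): outputs (mu, log_sigma) of a diagonal Gaussian\n        self.encoder = nn.Linear(x_dim + y_dim, 2 * z_dim)\n\n    def forward(self, x, y=None):\n        pi = torch.softmax(self.classifier(x), dim=-1)\n        y_in = y if y is not None else pi    # true label if available, classifier output otherwise\n        mu, log_sigma = self.encoder(torch.cat([x, y_in], dim=-1)).chunk(2, dim=-1)\n        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)    # re-parameterized sample\n        return z, pi\n\nposterior = M2Posterior(x_dim=5000, y_dim=4, z_dim=8)\nz, pi = posterior(torch.rand(5000))    # unlabeled case: pi comes from q(y|x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Generative Models",
"sec_num": "3.2"
},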
{
"text": "LI-NTM is a neural topic model that leverages the labels y as a latent variable alongside the topic proportions \u03b8 in generating the document x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Notationally, let us denote the bag of words representation of a document as x \u2208 R V and the one-hot encoded document label as y \u2208 R L . Furthermore, we denote our latent topic proportions as \u03b8 d \u2208 R K and our topics are represented using a three dimensional matrix \u03b2 \u2208 R L\u00d7K\u00d7V .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Under the LI-NTM, the generative process (also depicted in Figure 2 ) of the d-th document x d is the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "1. Draw topic proportions \u03b8 d \u223c LN (0, I) 2. Draw document label y d \u223c \u03c0 3. For each word w in document: (a) Draw topic assignment z dn \u223c Cat(\u03b8 d ) (b) Draw word w dn \u223c Cat(\u03b2 y d ,z dn ) In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Step 1, we draw from the Logistic-Normal LN (\u2022) to approximate the Dirichlet Distribution while remaining in the location-scale family necessary for re-parameterization (Blei et al., 2003) . This is done obtained through:",
"cite_spans": [
{
"start": 169,
"end": 188,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "\u03b4 d \u223c N (0, I), \u03b8 d = sof tmax(\u03b4 d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Note that since we sample from the Logistic-Normal, we do not require the Dirichlet prior hyperparameter \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Step 2 is unique for LI-NTM , in the unlabeled case, we sample a label y d from \u03c0, which is the output of our classifier. In the labeled scenario, we skip step 2 and simply pass in the document label for our y d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Step 3 is typical of traditional LDA, but one key difference is that in step 3b we also index by the \u03b2 by y d instead of just z dn . This step is motivated by how the M2 model extended variational autoencoders to a semi-supervised setting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "A key contribution of our model is the idea of label-indexing. We introduce the supervision of the document labels by having different topics for different labels. Specifically, we have L \u00d7 K different topics and we denote the k-th topic for label l as the V dimensional vector, \u03b2 l,k . Under this setting, we can envision LI-NTM as running a separate LDA for each label once we index our corpus by document labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
{
"text": "Label-indexing allows us to effectively train our model in a semi-supervised setting. In the unlabeled data setting, our jointly-learned classifier, q \u03d5 (y|x), outputs a distribution over the labels, \u03c0. By computing the dot-product between \u03c0 and our topic matrix \u03b2, this allows us to partially index into each label's topic proportional to the classifier's confidence and update the topics based on the unlabeled examples we are currently training on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},
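{
"text": "One plausible reading of this partial label-indexing, sketched in PyTorch; the shapes follow the notation above, while the placement of the softmax over the vocabulary is our assumption.\n\nimport torch\n\ndef label_indexed_word_probs(beta, pi, theta):\n    # beta: (L, K, V) label-topic-word scores; pi: (L,) label distribution; theta: (K,) topic proportions\n    topic_word = torch.softmax(beta, dim=-1)            # per-topic word distributions\n    mixed = torch.einsum('l,lkv->kv', pi, topic_word)   # partially index into each label's topics by pi\n    return theta @ mixed                                # (V,) word probabilities for the document\n\nL, K, V = 4, 8, 5000\nprobs = label_indexed_word_probs(torch.randn(L, K, V),\n                                 torch.tensor([1.0, 0.0, 0.0, 0.0]),   # one-hot when the label is observed\n                                 torch.softmax(torch.randn(K), dim=-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Label-Indexed Neural Topic Model",
"sec_num": "4"
},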
{
"text": "Initialize model and variational parameters for (Jordan et al., 1999) . Furthermore, we amortize the loss to allow for joint learning of the classifier and the topic model.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Jordan et al., 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Topic Modeling with LI-NTM",
"sec_num": null
},
{
"text": "iteration i = 1, 2, . . . do for each document c in c 1 , c 2 , \u2022 \u2022 \u2022 , c d do Get normalized bag-of-word representa- tion x d Compute \u00b5 d = NN encoder (x d |\u03d5 \u00b5 ) Compute \u03a3 d = NN encoder (x d |\u03d5 \u03a3 ) if labeled then \u03c0 = y d else \u03c0 = NN classif ier (x d |\u03bd) end if Sample \u03b8 d \u223c LN (\u00b5 d , \u03a3 d ) for each word in the document do p(w dn |\u03b8 d , \u03c0) = softmax(\u03b2) T \u03c0\u03b8 d end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Topic Modeling with LI-NTM",
"sec_num": null
},
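{
"text": "A self-contained PyTorch sketch of one forward pass following the steps above; the layer sizes, the diagonal-covariance encoder, and the mixing step (which mirrors the earlier decoding sketch) are our simplifying assumptions rather than the paper's exact networks.\n\nimport torch\nimport torch.nn as nn\n\nV, K, L = 5000, 8, 4                          # illustrative sizes\nencoder = nn.Linear(V, 2 * K)                 # produces (mu_d, log_sigma_d)\nclassifier = nn.Linear(V, L)                  # q_nu(y|x)\nbeta = nn.Parameter(torch.randn(L, K, V))     # label-topic-word scores\n\ndef forward(x_bow, y_onehot=None):\n    x = x_bow / x_bow.sum()                                       # normalized bag-of-words representation\n    mu, log_sigma = encoder(x).chunk(2, dim=-1)\n    pi = y_onehot if y_onehot is not None else torch.softmax(classifier(x), dim=-1)\n    delta = mu + torch.exp(log_sigma) * torch.randn_like(mu)      # re-parameterized sample of delta_d\n    theta = torch.softmax(delta, dim=-1)                          # logistic-normal topic proportions\n    mixed = torch.einsum('l,lkv->kv', pi, torch.softmax(beta, dim=-1))\n    return theta @ mixed, mu, log_sigma, pi                       # p(w | theta_d, pi) over the vocabulary\n\nword_probs, mu, log_sigma, pi = forward(torch.rand(V))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Topic Modeling with LI-NTM",
"sec_num": null
},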
{
"text": "We begin first by looking at a family of variational distributions q \u03d5 (\u03b4 We use this family of variational distributions alongside our classifier to lower-bound the marginal likelihood. The evidence lower bound (ELBO) is a function of model and variational parameters and provides a lower bound for the complete data log-likelihood. We derive two ELBObased loss functions: one for the labeled case and one for the unlabeled case and we compute a linear interpolation of the two for our overall loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L u = D d=1 N d n=1 E q [log p(w dn |\u03b4 d , q \u03bd (y d |x d )] \u2212 \u03c4 KL(q \u03d5 (\u03b4 d |x d )||p(\u03b4 d ))",
"eq_num": "(1)"
}
],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L l = D d=1 N d n=1 E q [log p(w dn |\u03b4 d , q \u03bd (y d |x d )] \u2212 \u03c4 KL(q \u03d5 (\u03b4 d |x d )||p(\u03b4 d )) + \u03c1H(y d , q \u03bd (y d |x d ))",
"eq_num": "(2)"
}
],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "where Equation 1 serves as our unlabeled loss and Equation 2 serves as our labeled loss. H (\u2022, \u2022) is the cross-entropy function. \u03c4 and \u03c1 are hyperparameters on the KL and cross-entropy terms in the loss respectively. These hyper-parameters are well motivated. \u03c4 is seen to be a hyper-parameter that tempers our posterior distribution over weights, which has been well-studied and shown to increase robustness to model mis-specification (Mandt et al., 2016; Wenzel et al., 2020) . Lower values \u03c4 would result in posterior distributions with higher probability densities around the modes of the posterior. Furthermore, the \u03c1 hyperparameter in our unlabeled loss is the core hyperparameter that makes our model fit the prediction-constrained framework, essentially allowing us to trade-off the between classifier and topic modeling performance (Hughes et al., 2018) . Increasing values of \u03c1 corresponds to emphasizing classifier performance over topic modeling performance.",
"cite_spans": [
{
"start": 436,
"end": 456,
"text": "(Mandt et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 457,
"end": 477,
"text": "Wenzel et al., 2020)",
"ref_id": null
},
{
"start": 841,
"end": 862,
"text": "(Hughes et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 91,
"end": 97,
"text": "(\u2022, \u2022)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "We treat our overall loss as a combination of our labeled and unlabeled loss with \u03bb \u2208 (0, 1) being a hyper-parameter weighing the labeled and unlabeled loss. \u03bb allows us weigh how heavily we want our unlabeled data to influence our models. Example cases where we may want high values of \u03bb are when we have poor classifier performance or a disproportionate amount of unlabeled data compared to label data, causing the unlabeled loss to completely outweigh the labeled loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u03bbL l + (1 \u2212 \u03bb)L u",
"eq_num": "(3)"
}
],
"section": "Variational Inference",
"sec_num": "5.1"
},
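{
"text": "A sketch of how the labeled and unlabeled objectives could be combined in code, treating Equations 1-3 as quantities to minimize (up to the sign convention of the ELBO); the reductions, the numerical epsilon, and the default hyperparameter values are our choices.\n\nimport torch\n\ndef kl_std_normal(mu, log_sigma):\n    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian\n    return 0.5 * torch.sum(mu ** 2 + torch.exp(2 * log_sigma) - 2 * log_sigma - 1, dim=-1)\n\ndef li_ntm_loss(word_probs, x_bow, mu, log_sigma, pi, y_onehot=None, tau=1.0, rho=1.0, lam=0.5):\n    recon = -(x_bow * torch.log(word_probs + 1e-10)).sum(dim=-1)   # -E_q[log p(w_dn | delta_d, y_d)]\n    loss_u = recon + tau * kl_std_normal(mu, log_sigma)            # unlabeled loss, cf. Equation 1\n    if y_onehot is None:\n        return (1 - lam) * loss_u.mean()\n    ce = -(y_onehot * torch.log(pi + 1e-10)).sum(dim=-1)           # H(y_d, q_nu(y_d | x_d))\n    loss_l = loss_u + rho * ce                                     # labeled loss, cf. Equation 2\n    return lam * loss_l.mean()                                     # combined as in Equation 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference",
"sec_num": "5.1"
},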
{
"text": "We optimize our loss with respect to both the model and variational parameters and leverage the reparameterization trick to perform stochastic optimization (Kingma and Welling, 2014). The training procedure is shown in Algorithm 1 and a visualization of a forward pass is given in Figure 3 . This loss function allows us to jointly learn our classification and topic modeling elements and we hypothesize that the implicit regularization from joint learning will increase performance for both elements as seen in previous research studies (Zweig and Weinshall, 2013) .",
"cite_spans": [
{
"start": 538,
"end": 565,
"text": "(Zweig and Weinshall, 2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 281,
"end": 289,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variational Inference",
"sec_num": "5.1"
},
{
"text": "We perform an empirical evaluation of LI-NTM with two corpora: a synthetic dataset and AG News.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6"
},
{
"text": "We compare our topic model to the Embedded Topic Model (ETM), which is the current state of the art neural topic model that leverages word embeddings alongside variational autoencoders for unsupervised topic modeling (Dieng et al., 2019a) . Further details about ETM are shown in the appendix (subsection A.2). Furthermore, our baseline for our jointly trained classifier is a classifier with the same architecture outside of our jointly trained setting.",
"cite_spans": [
{
"start": 217,
"end": 238,
"text": "(Dieng et al., 2019a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "We constructed our synthetic data to evaluate LI-NTM in ideal and worst-case settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
{
"text": "\u2022 Ideal Setting: An ideal setting for LI-NTM consists of a corpus with similar word distributions for documents with the same label and very dissimilar word distributions for documents with different labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
{
"text": "\u2022 Worst Case Setting worst-case setting for LI-NTM consists of a corpus where the label has little to no correlation with the distribution of words in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
{
"text": "Since the labels are a fundamental aspect of LI-NTM we wanted to investigate how robust LI-NTM is in a real-word setting, specifically looking at how robust it was to certain types of mis-labeled data points. By jointly training our classifier with our topic model, we hope that by properly trading off topic quality and classification quality, our model will be more robust to mis-labeled data since we are able to manually tune how much we want to depend on the data labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
{
"text": "We use the same distributions to generate the documents for both the ideal and worst-case data. In particular, we consider a vocabulary with V = 20 words, and a task with L = 2 labels. Documents are generated from one of two distributions, D 1 and D 2 . D 1 generates documents which have many occurrences of the first 10 words in the vocabulary (and very few occurrences of the last 10 words), while D 2 does the opposite, generating documents which have many occurrences of the last 10 words in the vocabulary (and very few occurrences of the first 10 words). The distributions D 1 and D 2 have parameters which are generated randomly for each trial, although the shape of the distributions is largely the same from trial to trial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
{
"text": "In the ideal case, the label corresponds directly to the distribution from which the document was generated. For the worst-case data, the label is 0 if the number of words in the document is an even number, and 1 otherwise, ensuring there is little to no correlation between label and word distributions in a document. Note that in our synthetic data experiments, all of the data is labeled. The effectiveness of LI-NTM in semi-supervised domains is evaluated in our AG News experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},
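{
"text": "An illustrative NumPy construction of this synthetic data; the paper does not specify the exact parametric form of D_1 and D_2 or the document lengths, so the Dirichlet concentrations and the length range below are our assumptions.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nV, N_DOCS = 20, 1000\n\n# D1 concentrates mass on the first 10 words, D2 on the last 10 (parameters drawn per trial)\nd1 = rng.dirichlet(np.concatenate([np.full(10, 5.0), np.full(10, 0.1)]))\nd2 = rng.dirichlet(np.concatenate([np.full(10, 0.1), np.full(10, 5.0)]))\n\nlengths = rng.integers(20, 41, size=N_DOCS)          # varying document lengths (assumption)\nsource = rng.integers(0, 2, size=N_DOCS)             # which distribution generated each document\ndocs = np.stack([rng.multinomial(n, d1 if s == 0 else d2) for n, s in zip(lengths, source)])\n\nideal_labels = source                                # ideal case: label = generating distribution\nworst_labels = lengths % 2                           # worst case: parity of the document's word count",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Dataset",
"sec_num": "6.2"
},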
{
"text": "The AG News dataset is a collection of news articles collected from more than 2,000 news sources by ComeToMyHead, an academic news search engine. This dataset includes 118,000 training samples and 7,600 test samples. Each sample is a short text with a single four-class label (one of world, business, sports and science/technology).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AG News Dataset",
"sec_num": "6.3"
},
{
"text": "To evaluate our models, we used accuracy as a metric to gauge the quality of the classifier and perplexity to gauge the quality of the model as a whole. We opted to use perplexity as it is a measure for how well the model generalizes to unseen test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.4"
},
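{
"text": "For reference, one standard way to compute the per-word perplexity used as our topic-model metric; the paper does not spell out its exact normalization, so this is a common convention rather than the authors' definition.\n\nimport numpy as np\n\ndef perplexity(word_probs, bow):\n    # word_probs: (D, V) predicted word distributions; bow: (D, V) held-out word counts\n    log_lik = (bow * np.log(word_probs + 1e-10)).sum()\n    return float(np.exp(-log_lik / bow.sum()))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.4"
},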
{
"text": "We used our synthetic dataset to examine the performance of LI-NTM relative to ETM in a setting where the label strongly partitions our dataset into subsets that have distinct topics to investigate the effect and robustness of label indexing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Data Experimental Results",
"sec_num": "7"
},
{
"text": "LI-NTM was trained on the fully labeled version of the both the ideal and worse case label synthetic dataset and ETM was trained on the same dataset with the label excluded, as ETM is a unsupervised method. We varied the number of topics in both LI- Figure 4 : Topic-word probability distribution visualization for LI-NTM on ideal case synthetic dataset with one topic per label. We observe that we learn topics that are strongly label partitioned.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 258,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Data Experimental Results",
"sec_num": "7"
},
{
"text": "NTM and ETM to explore realistic settings K = 2, 8 and the extreme setting K = 20.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Data Experimental Results",
"sec_num": "7"
},
{
"text": "Takeaway: More topics lead to better performance, especially when the label is uninformative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Number of Topics",
"sec_num": "7.1"
},
{
"text": "First, we note that as we increase the number of topics, the performance of LI-NTM on ideal case labels, LI-NTM on worst case labels, and ETM improves as shown in Table 1 . This is expected as having more topics gives the model the capacity to learn more diverse topic-word distributions which leads to an improved reconstruction. However, we note that LI-NTM trained on the worst-case labels benefits most from the increase in the number of topics.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effect of Number of Topics",
"sec_num": "7.1"
},
{
"text": "Takeaway: Label Indexing is highly effective when labels partition the dataset well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informative Labels",
"sec_num": "7.2"
},
{
"text": "Next, we note that LI-NTM trained on the ideal case label synthetic dataset outperforms ETM with respect to perplexity (see Table 1 ). This result can be attributed to the fact that LI-NTM leverages label indexing to learn the label-topic-word distribution. Since the ideal case label version of the dataset was constructed such that the label strongly partitions the dataset into two groups (each of which has a very distinct topic-word distribution), and since we had perfect classifier accuracy (the ideal case label dataset was constructed such that the classification problem was trivial), LI-NTM is able to use the output Table 2 : Accuracies of classifier LI-NTM (V2) on ideal case and worst case labels. LI-NTM (V2) is trained only on worst-case labels but evaluated on both worst case and ideal case label test sets. Note that even though \u03b1 = 0 and the training set is only worst case labels, the reconstruction loss distantly supervises the classifier to learn the true ideal case labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 628,
"end": 635,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Informative Labels",
"sec_num": "7.2"
},
{
"text": "from the classifier to index into the topic-word distribution with 100% accuracy. If we denote the topic-word distribution corresponding to label 0 by \u03b2 0 and the topic-word distribution corresponding to label 1 by \u03b2 1 , we note that LI-NTM is able to leverage \u03b2 0 to specialize in generating the words for the documents corresponding to label 0 while using \u03b2 1 to specialize in generating the words for the documents corresponding to label 1 (see Figure 4) . Overall, this result suggests that LI-NTM performs well in settings when the dataset exhibits strong label partitioning.",
"cite_spans": [],
"ref_spans": [
{
"start": 448,
"end": 457,
"text": "Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Informative Labels",
"sec_num": "7.2"
},
{
"text": "Takeaway: With proper hyperparameters, LI-NTM is able to achieve good topic model performance even when we have uninformative labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "We now move to examining the results produced by LI-NTM trained on the worst-case labels. In this data setting, we investigated the robustness of the LI-NTM architecture. Specifically, we looked at a worst-case dataset, where we have labels that are uninformative and are thus not good at partitioning the dataset into subsets that have distinct topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "In the worst-case setting, we define the following two instances of the LI-NTM model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "\u2022 LI-NTM (V1) This model refers to the normal (\u03c1 \u0338 = 0) version of the model trained in the worst case setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "\u2022 LI-NTM (V2) This model refers to a LI-NTM model with zero-ed out classification loss (\u03c1 = 0), essentially pushing the model to only accurately reconstruct the original data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "For LI-NTM (V1), we did see decreases in performance; namely, that V1 has a worse perplexity than both ETM and ideal case LI-NTM. This aligns with our expectation that having a label with very low correlation to the topic-word distributions in the document results in poor performance in LI-NTM. This can be attributed to the failure of LI-NTM to adequately label-index in cases where this occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "However, for LI-NTM (V2) we found that we were actually able to achieve lower perplexity than ETM when the model was told to produce more than 2 topics, even with uninformative labels. To understand why this was happening, we analyzed the accuracy of the original classifier in LI-NTM (V2) on both the worst-case labels (which it was trained on) and the ideal-case labels (which it was not trained on). We report our results in Table 2 . The key takeaway is that we observed a much higher accuracy on the ideal labels compared to the worst-case labels. This suggests that when \u03c1 = 0 the classifier implicitly learns the ideal labels that are necessary to learn a good reconstruction of the data, even when the provided labels are heavily uninformative or misspecified. This shows the benefit of label-indexing and of jointly learning our topic model and classifier in a semi-supervised fashion. Even in cases with uninformative data points, by setting \u03c1 = 0, the joint learning setting of our classifier and topic model pushes the classifier, through the need for successful document reconstruction, to generate a probability distribution over labels that is close to the true, ideal-case labels despite only being given uninformative or mis-labeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uninformative Labels",
"sec_num": "7.3"
},
{
"text": "We used the AG News dataset to evaluate the performance of LI-NTM in the semi-supervised setting. Specifically, we aimed to analyze the extent to which unlabeled data can improve the performance of both the classifier and topic model in the LI-NTM architecture. Ideally, in the unlabeled case, the distant supervision provided to the classifier from the reconstruction loss would align with the task of predicting correct labels. We ran four experiments on ETM and LI-NTM in which the amount of unlabeled data was gradually increased, while the amount of labeled data was kept fixed. In each of the experiments, 5% of the dataset was considered labeled, while 5%, 15%, 55%, and 95% of the whole dataset was considered unlabeled in each of the four experiments respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AG News Experimental Results",
"sec_num": "8"
},
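{
"text": "A small sketch of the labeled/unlabeled splits used in these experiments; the seed and index handling are ours, and only the fractions come from the text above.\n\nimport numpy as np\n\ndef make_split(n_docs, labeled_frac, unlabeled_frac, seed=0):\n    idx = np.random.default_rng(seed).permutation(n_docs)\n    n_lab = int(labeled_frac * n_docs)\n    n_unlab = int(unlabeled_frac * n_docs)\n    return idx[:n_lab], idx[n_lab:n_lab + n_unlab]    # labeled indices, unlabeled indices\n\n# the four settings: 5% labeled with 5%, 15%, 55%, or 95% unlabeled\nsplits = [make_split(118000, 0.05, frac) for frac in (0.05, 0.15, 0.55, 0.95)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AG News Experimental Results",
"sec_num": "8"
},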
{
"text": "Takeaway: Combining label-indexing with semi-supervised learning increases topic model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Learning: Topic Model Performance",
"sec_num": "8.1"
},
{
"text": "In Table 3 we observe that perplexity decreases as the model sees more unlabeled data. We also note that LI-NTM has a lower perplexity than ETM in higher data settings, supporting the hypothesis that guiding the reconstruction of a document exclusively via label-specific topics makes reconstruction an easier task. In the lowest data regime (5% labeled, 5% unlabeled), LI-NTM performs worse than ETM. This suggests that while in high-data settings, LI-NTM is able to effectively leverage L = 4 sets of topics, in low-data settings there are not enough documents to learn sufficient structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semi-Supervised Learning: Topic Model Performance",
"sec_num": "8.1"
},
{
"text": "Takeaway: Topic modeling supervises the classifier, resulting in better classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Learning: Classifier Performance",
"sec_num": "8.2"
},
{
"text": "Jointly learning the classifier and topic model also seem to benefit the classifier; Table 3 shows classification performance increases linearly with the amount of unlabeled data. The accuracy increase suggest the task of reconstructing the bag of words is helpful in news article classification. Select topics learned from LI-NTM on the AG News Dataset are presented in Table 4 and the distributions are visualized in the appendix Figure A1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 371,
"end": 378,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 432,
"end": 441,
"text": "Figure A1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Semi-Supervised Learning: Classifier Performance",
"sec_num": "8.2"
},
{
"text": "In this paper, we introduced the LI-NTM, which, to the extent of our knowledge, is the first upstream neural topic model with applications to a semisupervised data setting. Our results show that when applied to both a synthetic dataset and AG News, LI-NTM outperforms ETM with respect to perplexity. Furthermore, we found that the classifier in LI-NTM was able to outperform a baseline that doesn't leverage any unlabeled data. Even more promising is the fact that the classifier in LI-NTM continued to experience gains in accuracy when increasing the proportion of unlabeled data. While we aim to iterate upon our results, our current findings indicate that LI-NTM is comparable with current state-of-the-art models while being applicable in a wider range of real-world settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "In future work, we hope to further experiment with the idea of label-indexing. While in LI-NTM every topic is label-specific, real datasets have some common words and topics that are labelagnostic. Future work could augment the existing LI-NTM framework with additional label-agnostic global topics which prevent identical topics from being learned across multiple labels. We are also interested in extending our semi-supervised, upstream paradigm to a semi-parametric setting in which the number of topics we learn is not a predefined hyperparameter but rather something that is learned. components and training them together lead to undesirable local minima (both perplexity and classification accuracy were undesirable). Instead, we consistently achieved our best results by first training the classifier normally on the task before training all three components together. All experimental results shown used this optimization procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Please find the generative process for ETM below (Dieng et al., 2019a) . Note that ETM has two latent dimensions. There is the L-dimensional embedding space which the vocabulary is embedded into and each document is represented by K latent topics. Furthermore, note that in ETM, each topic is represented by a vector \u03b1 k \u2208 R L which is the embedded representation of the topic in embedding space. Furthermore, ETM defines an embedding matrix \u03c1 with dimension L \u00d7 K where the column \u03c1 v is the embedding of word v. Note that LINT-m has the nice property that topics are naturally sorted by label unlike ETM. The first topic, with words like \"series\", \"yankees\", \"red\", \"sox\", corresponds to baseball. Note that perplexity will still be high even if this topic is correctly given a high proportion in a baseball-themed news article since there are many potential baseball teams and baseball terminology that the article could be referencing. The second topic corresponds to search engines, and the third corresponds to the Israeli-Palestinian conflict.",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Dieng et al., 2019a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Embedded Topic Model (ETM)",
"sec_num": null
}
],
"back_matter": [
{
"text": "AS is supported by R01MH123804, and FDV is supported by NSF IIS-1750358. All authors acknowledge insightful feedback from members of CS282 Fall 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "10"
},
{
"text": "ETM Perplexity LI-NTM Perplexity LI-NTM Accuracy Baseline Accuracy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Regime",
"sec_num": null
},
{
"text": "During optimization, there are three components of LI-NTM that are being trained: the encoder neural network, \u03b2 (the word distributions per label and topic), and the classifier neural network. We found randomly initializing all three trainable",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Optimization Procedure",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Probabilistic topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "Communications of the ACM",
"volume": "55",
"issue": "4",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei. 2012. Probabilistic topic models. Com- munications of the ACM, 55(4):77-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical topic models and the nested chinese restaurant process",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Thomas L Griffiths, Michael I Jordan, Joshua B Tenenbaum, et al. Hierarchical topic mod- els and the nested chinese restaurant process.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A correlated topic model of science. The annals of applied statistics",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "John D",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "1",
"issue": "",
"pages": "17--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei and John D Lafferty. 2007. A correlated topic model of science. The annals of applied statis- tics, 1(1):17-35.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supervised topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"D"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcauliffe",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and Jon D. McAuliffe. 2010. Supervised topic models.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A novel neural topic model and its supervised extension",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "29",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its super- vised extension. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, volume 29.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Topic modeling in embedding spaces",
"authors": [
{
"first": "B",
"middle": [],
"last": "Adji",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dieng",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Francisco",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Ruiz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei. 2019a. Topic modeling in embedding spaces.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The dynamic embedded topic model",
"authors": [
{
"first": "B",
"middle": [],
"last": "Adji",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dieng",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Francisco",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Ruiz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.05545"
]
},
"num": null,
"urls": [],
"raw_text": "Adji B Dieng, Francisco JR Ruiz, and David M Blei. 2019b. The dynamic embedded topic model. arXiv preprint arXiv:1907.05545.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Topic model or topic twaddle? re-evaluating semantic interpretability measures",
"authors": [
{
"first": "Caitlin",
"middle": [],
"last": "Doogan",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3824--3848",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.300"
]
},
"num": null,
"urls": [],
"raw_text": "Caitlin Doogan and Wray Buntine. 2021. Topic model or topic twaddle? re-evaluating semantic inter- pretability measures. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3824-3848, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Prediction-constrained hidden markov models for semi-supervised classification",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Hope",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"B"
],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sudderth",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Hope, Michael C Hughes, Finale Doshi-Velez, and Erik B Sudderth. Prediction-constrained hidden markov models for semi-supervised classification.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Is automated topic model evaluation broken? the incoherence of coherence",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Hoyle",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Hian-Cheong",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Peskov",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2021,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, and Philip Resnik. 2021. Is automated topic model evaluation broken? the incoherence of coherence. Advances in Neural Information Processing Systems, 34.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semi-supervised prediction-constrained topic models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Hope",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Weiner",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Perlis",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Sudderth",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics",
"volume": "84",
"issue": "",
"pages": "1067--1076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Hughes, Gabriel Hope, Leah Weiner, Thomas McCoy, Roy Perlis, Erik Sudderth, and Finale Doshi- Velez. 2018. Semi-supervised prediction-constrained topic models. In Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1067-1076. PMLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discriminative topic modeling based on manifold learning",
"authors": [
{
"first": "Seungil",
"middle": [],
"last": "Huh",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fienberg",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Transactions on Knowledge Discovery from Data (TKDD)",
"volume": "5",
"issue": "4",
"pages": "1--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seungil Huh and Stephen E Fienberg. 2012. Discrim- inative topic modeling based on manifold learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 5(4):1-25.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Tomoharu Iwata. 2021. Few-shot learning for topic modeling",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.09011"
]
},
"num": null,
"urls": [],
"raw_text": "Tomoharu Iwata. 2021. Few-shot learning for topic modeling. arXiv preprint arXiv:2104.09011.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An introduction to variational methods for graphical models",
"authors": [
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"K"
],
"last": "Saul",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine learning",
"volume": "37",
"issue": "2",
"pages": "183--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. 1999. An intro- duction to variational methods for graphical models. Machine learning, 37(2):183-233.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semi-supervised learning with deep generative models",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Danilo",
"middle": [
"J"
],
"last": "Rezende",
"suffix": ""
},
{
"first": "Shakir",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma, Danilo J. Rezende, Shakir Mo- hamed, and Max Welling. 2014. Semi-supervised learning with deep generative models.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Max Welling. 2014. Auto- encoding variational bayes.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Variational dropout and the local reparameterization trick",
"authors": [
{
"first": "Durk",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Durk P Kingma, Tim Salimans, and Max Welling. 2015. Variational dropout and the local reparameterization trick. Advances in neural information processing systems, 28.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Disclda: Discriminative learning for dimensionality reduction and classification",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Lacoste-Julien, Fei Sha, and Michael Jordan. 2008. Disclda: Discriminative learning for dimen- sionality reduction and classification. Advances in neural information processing systems, 21.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An overview of topic modeling and its current applications in bioinformatics",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Shaowen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "SpringerPlus",
"volume": "5",
"issue": "1",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Liu, Lin Tang, Wen Dong, Shaowen Yao, and Wei Zhou. 2016. An overview of topic modeling and its current applications in bioinformatics. SpringerPlus, 5(1):1-22.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Variational tempering",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Mandt",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mcinerney",
"suffix": ""
},
{
"first": "Farhan",
"middle": [],
"last": "Abrol",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2016,
"venue": "Artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "704--712",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Mandt, James McInerney, Farhan Abrol, Ra- jesh Ranganath, and David Blei. 2016. Variational tempering. In Artificial intelligence and statistics, pages 704-712. PMLR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sshlda: A semi-supervised hierarchical topic model",
"authors": [
{
"first": "Xianling",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Zhaoyan",
"middle": [],
"last": "Ming",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "800--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xianling Mao, Zhaoyan Ming, Tat-Seng Chua, Si Li, Hongfei Yan, and Xiaoming Li. 2012. Sshlda: A semi-supervised hierarchical topic model. In EMNLP-CoNLL, pages 800-809.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural variational inference for text processing",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Topic modeling with wasserstein autoencoders",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Nan",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Nan, Ran Ding, Ramesh Nallapati, and Bing Xi- ang. 2019. Topic modeling with wasserstein autoen- coders.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A hierarchical model of web summaries",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Petinot",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Kapil",
"middle": [],
"last": "Thadani",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "670--675",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Petinot, Kathleen McKeown, and Kapil Thadani. 2011. A hierarchical model of web summaries. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 670-675, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Labeled lda: A supervised topic model for credit attribution in multilabeled corpora",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "248--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled lda: A su- pervised topic model for credit attribution in multi- labeled corpora. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing: Volume 1 -Volume 1, EMNLP '09, page 248-256, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Prediction focused topic models via feature selection",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Kunes",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "4420--4429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Ren, Russell Kunes, and Finale Doshi-Velez. 2020. Prediction focused topic models via feature selection. In International Conference on Artificial Intelligence and Statistics, pages 4420-4429. PMLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stochastic backpropagation and approximate inference in deep generative models",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Jimenez Rezende",
"suffix": ""
},
{
"first": "Shakir",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1278--1286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and ap- proximate inference in deep generative models. In International conference on machine learning, pages 1278-1286. PMLR.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The author-topic model for authors and documents",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Rosen-Zvi",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.4169"
]
},
"num": null,
"urls": [],
"raw_text": "Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2012. The author-topic model for authors and documents. arXiv preprint arXiv:1207.4169.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Chemical topic modeling: Exploring molecular data sets using a common text-mining approach",
"authors": [
{
"first": "Nadine",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Nikolas",
"middle": [],
"last": "Fechner",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"A"
],
"last": "Landrum",
"suffix": ""
},
{
"first": "Nikolaus",
"middle": [],
"last": "Stiefl",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Chemical Information and Modeling",
"volume": "57",
"issue": "8",
"pages": "1816--1831",
"other_ids": {
"DOI": [
"10.1021/acs.jcim.7b00249"
],
"PMID": [
"28715190"
]
},
"num": null,
"urls": [],
"raw_text": "Nadine Schneider, Nikolas Fechner, Gregory A. Lan- drum, and Nikolaus Stiefl. 2017. Chemical topic modeling: Exploring molecular data sets using a common text-mining approach. Journal of Chem- ical Information and Modeling, 57(8):1816-1831. PMID: 28715190.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On learning prediction-focused mixtures",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Sanjana",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Sonali",
"middle": [],
"last": "Parbhoo",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2110.13221"
]
},
"num": null,
"urls": [],
"raw_text": "Abhishek Sharma, Catherine Zeng, Sanjana Narayanan, Sonali Parbhoo, and Finale Doshi-Velez. 2021. On learning prediction-focused mixtures. arXiv preprint arXiv:2110.13221.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Autoencoding variational inference for topic models",
"authors": [
{
"first": "Akash",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akash Srivastava and Charles Sutton. 2017. Autoencod- ing variational inference for topic models.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural topic model with attention for supervised learning",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "1147--1156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Wang and Yi Yang. 2020. Neural topic model with attention for supervised learning. In Interna- tional Conference on Artificial Intelligence and Statis- tics, pages 1147-1156. PMLR.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. 2020. How good is the bayes posterior in deep neural networks really? In International Conference on Machine Learning",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Wenzel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Bastiaan",
"middle": [],
"last": "Veeling",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Swiatkowski",
"suffix": ""
},
{
"first": "Linh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Mandt",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "10248--10259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Wenzel, Kevin Roth, Bastiaan Veeling, Jakub Swiatkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebas- tian Nowozin. 2020. How good is the bayes posterior in deep neural networks really? In International Con- ference on Machine Learning, pages 10248-10259. PMLR.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A biterm topic model for short texts",
"authors": [
{
"first": "Xiaohui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1445--1456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd international conference on World Wide Web, pages 1445-1456.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hierarchical regularization cascade for joint learning",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Daphna",
"middle": [],
"last": "Weinshall",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "37--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Zweig and Daphna Weinshall. 2013. Hierarchical regularization cascade for joint learning. In Inter- national Conference on Machine Learning, pages 37-45. PMLR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Generative process for downstream vs upstream supervision. Note that in upstream supervision, the label, y, supervises the document, x, whereas in downstream supervision the document supervises the label. \u03b8 is an arbitrary latent variable, in our case representing topic proportions."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "d |x d ) in modeling the untransformed topic proportions and q \u03bd (y d |x d ) in modeling the classifier. More specifically, q \u03d5 (\u03b4 d |x d ) is a Gaussian whose mean and variance are parameterized by neural networks with parameter \u03d5 and q \u03bd (y d |x d ) is a distribution over the labels parameterized by a MLP with parameter \u03bd (Kingma and Welling, 2014; Kingma et al., 2014)."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Draw topic proportions \u03b8 d \u223c LN (0, I) 2. For each word n in document: (a) Draw topic assignment z dn \u223c Cat(\u03b8 d ) (b) Draw word w dn \u223c softmax(\u03c1 T \u03b1 z dn )A.3 Visualization of TopicsSeeFigure A1"
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The probabilities of the top words from 5 selected topics from LINT-m."
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>for Compute the ELBO and its gradient (back-end for</td></tr><tr><td>prop.)</td></tr><tr><td>Update model parameters \u03b2</td></tr><tr><td>Update variational parameters (\u03d5 \u00b5 , \u03d5 \u03a3 , \u03bd) end for</td></tr><tr><td>5 Inference and Estimation</td></tr></table>",
"text": "Given a corpus of normalized bag-of-word representation of documents x 1 , x 2 , \u2022 \u2022 \u2022 , x d we aim to fit LI-NTM using variational inference in order to approximate intractable posteriors in maximum likelihood estimation",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>2 8 20</td><td>11.78 11.27 10.88</td><td>11.42 10.72 10.50</td><td>19.71 12.83 11.20</td><td>18.70 10.90 10.77</td><td>\u2212 \u2212 9.50</td></tr><tr><td/><td colspan=\"4\">Total Num. Topics Worst Case Labels Ideal Case Labels</td><td/></tr><tr><td/><td/><td>2 8 20</td><td>50.2 \u00b1 0.6 50.4 \u00b1 0.5 50.4 \u00b1 0.2</td><td>54.2 \u00b1 2.0 84.3 \u00b1 8.8 93.7 \u00b1 6.2</td><td/></tr></table>",
"text": "Total Num. Topics ETM Ideal LI-NTM WC LI-NTM (V1) WC LI-NTM (V2) Perplexity Lower Bound Perplexities of LI-NTM (ideal and worst case synthetic data) compared to ETM for a varied number of topics. WC LI-NTM (V1) corresponds to training the model normally in the worst case setting, while WC LI-NTM (V2) corresponds to training with \u03c1 = 0. Note that LI-NTM is able to outperform ETM in both the ideal and worst case scenarios.",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Sports Science/Technology</td><td>World</td><td>Business</td></tr><tr><td>series</td><td>web</td><td>minister</td><td>stocks</td></tr><tr><td>game</td><td>search</td><td>prime</td><td>oil</td></tr><tr><td>red</td><td>google</td><td>palestinian</td><td>prices</td></tr><tr><td>boston</td><td>new</td><td>gaza</td><td>reuters</td></tr><tr><td>run</td><td>online</td><td>israel</td><td>company</td></tr><tr><td>night</td><td>site</td><td>leader</td><td>shares</td></tr><tr><td>league</td><td>internet</td><td>arafat</td><td>inc</td></tr><tr><td>yankees</td><td>engine</td><td>said</td><td>percent</td></tr><tr><td>new</td><td>com</td><td>yasser</td><td>yesterday</td></tr><tr><td>york</td><td>yahoo</td><td>sharon</td><td>percent</td></tr></table>",
"text": "The results from ETM, LI-NTM, and a baseline classifier for the AG News dataset. The baseline classifier was the same for each data regime, hence the duplicate values. Note that in the high data settings, LI-NTM outperformed ETM in terms of perplexity, although in the lowest data setting, the lack of data hurt LI-NTM since it further partitions the topics by labels. Accuracy increased near linearly as unlabeled data increased.",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Example topics (top ten words) corresponding to each label from LI-NTM run on the AG-News Dataset. Each topic is assigned a label and it is clear that the distribution of words for each topic depends on the label.",
"num": null
}
}
}
}