|
{ |
|
"paper_id": "D11-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:33:50.987402Z" |
|
}, |
|
"title": "Optimizing Semantic Coherence in Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Edmund", |
|
"middle": [], |
|
"last": "Talley", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Leenders", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Latent variable models have the potential to add value to large document collections by discovering interpretable, low-dimensional subspaces. In order for people to use such models, however, they must trust them. Unfortunately, typical dimensionality reduction methods for text, such as latent Dirichlet allocation, often produce low-dimensional subspaces (topics) that are obviously flawed to human domain experts. The contributions of this paper are threefold: (1) An analysis of the ways in which topics can be flawed; (2) an automated evaluation metric for identifying such topics that does not rely on human annotators or reference collections outside the training data; (3) a novel statistical topic model based on this metric that significantly improves topic quality in a large-scale document collection from the National Institutes of Health (NIH).", |
|
"pdf_parse": { |
|
"paper_id": "D11-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Latent variable models have the potential to add value to large document collections by discovering interpretable, low-dimensional subspaces. In order for people to use such models, however, they must trust them. Unfortunately, typical dimensionality reduction methods for text, such as latent Dirichlet allocation, often produce low-dimensional subspaces (topics) that are obviously flawed to human domain experts. The contributions of this paper are threefold: (1) An analysis of the ways in which topics can be flawed; (2) an automated evaluation metric for identifying such topics that does not rely on human annotators or reference collections outside the training data; (3) a novel statistical topic model based on this metric that significantly improves topic quality in a large-scale document collection from the National Institutes of Health (NIH).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Statistical topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) provide a powerful framework for representing and summarizing the contents of large document collections. In our experience, however, the primary obstacle to acceptance of statistical topic models by users the outside machine learning community is the presence of poor quality topics. Topics that mix unrelated or looselyrelated concepts substantially reduce users' confidence in the utility of such automated systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 86, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In general, users prefer models with larger numbers of topics because such models have greater resolution and are able to support finer-grained distinctions. Unfortunately, we have observed that there is a strong relationship between the size of topics and the probability of topics being nonsensical as judged by domain experts: as the number of topics increases, the smallest topics (number of word tokens assigned to each topic) are almost always poor quality. The common practice of displaying only a small number of example topics hides the fact that as many as 10% of topics may be so bad that they cannot be shown without reducing users' confidence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The evaluation of statistical topic models has traditionally been dominated by either extrinsic methods (i.e., using the inferred topics to perform some external task such as information retrieval (Wei and Croft, 2006) ) or quantitative intrinsic methods, such as computing the probability of held-out documents (Wallach et al., 2009) . Recent work has focused on evaluation of topics as semanticallycoherent concepts. For example, Chang et al. (2009) found that the probability of held-out documents is not always a good predictor of human judgments. Newman et al. (2010) showed that an automated evaluation metric based on word co-occurrence statistics gathered from Wikipedia could predict human evaluations of topic quality. AlSumait et al. (2009) used differences between topic-specific distributions over words and the corpus-wide distribution over words to identify overly-general \"vacuous\" topics. Finally, Andrzejewski et al. (2009) developed semi-supervised methods that avoid specific user-labeled semantic coherence problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 218, |
|
"text": "(Wei and Croft, 2006)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 334, |
|
"text": "(Wallach et al., 2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 451, |
|
"text": "Chang et al. (2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 572, |
|
"text": "Newman et al. (2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 751, |
|
"text": "AlSumait et al. (2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 915, |
|
"end": 941, |
|
"text": "Andrzejewski et al. (2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contributions of this paper are threefold: (1) To identify distinct classes of low-quality topics, some of which are not flagged by existing evaluation methods; (2) to introduce a new topic \"coherence\" score that corresponds well with human coherence judgments and makes it possible to identify specific semantic problems in topic models without human evaluations or external reference corpora; (3) to present an example of a new topic model that learns latent topics by directly optimizing a metric of topic coherence. With little additional computational cost beyond that of LDA, this model exhibits significant gains in average topic coherence score. Although the model does not result in a statisticallysignificant reduction in the number of topics marked \"bad\", the model consistently improves the topic coherence score of the ten lowest-scoring topics (i.e., results in bad topics that are \"less bad\" than those found using LDA) while retaining the ability to identify low-quality topics without human interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "LDA is a generative probabilistic model for documents W = {w^{(1)}, . . . , w^{(D)}}. To generate a word token w^{(d)}_n in document d, we draw a discrete topic assignment z^{(d)}_n from a document-specific distribution over the T topics \u03b8_d (which is itself drawn from a Dirichlet prior with hyperparameter \u03b1), and then draw a word type for that token from the topic-specific distribution over the vocabulary \u03c6_{z^{(d)}_n}.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent Dirichlet Allocation",

"sec_num": "2"

},
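{

"text": "To make this generative process concrete, the following minimal sketch (our illustration, not part of the original paper; numpy is assumed) simulates LDA's generative story for a toy corpus:\n\nimport numpy as np\n\ndef generate_lda_corpus(D=5, T=3, V=10, N=20, alpha=0.1, beta=0.01, seed=0):\n    # Draw phi_t ~ Dirichlet(beta) for each topic and theta_d ~ Dirichlet(alpha)\n    # for each document, then sample a topic z and a word type w for every token.\n    rng = np.random.default_rng(seed)\n    phi = rng.dirichlet(np.full(V, beta), size=T)  # T x V topic-word distributions\n    docs = []\n    for d in range(D):\n        theta = rng.dirichlet(np.full(T, alpha))  # document-topic proportions theta_d\n        z = rng.choice(T, size=N, p=theta)  # topic assignment for each token\n        w = [int(rng.choice(V, p=phi[t])) for t in z]  # word types\n        docs.append((w, list(z)))\n    return docs, phi",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent Dirichlet Allocation",

"sec_num": "2"

},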
|
{ |
|
"text": "The inference task in topic models is generally cast as inferring the document-topic proportions {\u03b8 1 , ..., \u03b8 D } and the topic-specific distributions {\u03c6 1 . . . , \u03c6 T }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The multinomial topic distributions are usually drawn from a shared symmetric Dirichlet prior with hyperparameter \u03b2, such that conditioned on {\u03c6 t } T t=1 and the topic assignments {z (1) , z (2) , . . . , z (D) }, the word tokens are independent. In practice, however, it is common to deal directly with the \"collapsed\" distributions that result from integrating over the topic-specific multinomial parameters. The resulting distribution over words for a topic t is then a function of the hyperparameter \u03b2 and the number of words of each type assigned to that topic, N w|t . This distribution, known as the Dirichlet compound multinomial (DCM) or P\u00f3lya distribution (Doyle and Elkan, 2009) , breaks the assumption of conditional independence between word tokens given topics, but is useful during inference because the conditional probability of a word w given topic t takes a very simple form:", |
|
"cite_spans": [ |
|
{ |
|
"start": 667, |
|
"end": 690, |
|
"text": "(Doyle and Elkan, 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "P(w \\mid t, \\beta) = \\frac{N_{w|t} + \\beta}{N_t + |V| \\beta}, where N_t = \\sum_w N_{w|t} and |V| is the vocabulary size.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent Dirichlet Allocation",

"sec_num": "2"

},
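{

"text": "As a concrete illustration (our sketch, not the authors' code), this collapsed conditional can be computed directly from the count statistics:\n\nimport numpy as np\n\ndef collapsed_word_prob(N_wt, beta=0.01):\n    # N_wt: T x V matrix of topic-word counts N_{w|t}.\n    # Returns the T x V matrix of smoothed conditionals P(w | t, beta).\n    V = N_wt.shape[1]\n    N_t = N_wt.sum(axis=1, keepdims=True)  # total tokens per topic, N_t\n    return (N_wt + beta) / (N_t + V * beta)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent Dirichlet Allocation",

"sec_num": "2"

},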
|
{ |
|
"text": "The process for generating a sequence of words from such a model is known as the simple P\u00f3lya urn model (Mahmoud, 2008) , in which the initial probability of word type w in topic t is proportional to \u03b2, while the probability of each subsequent occurrence of w in topic t is proportional to the number of times w has been drawn in that topic plus \u03b2. Note that this unnormalized weight for each word type depends only on the count of that word type, and is independent of the count of any other word type w . Thus, in the DCM/P\u00f3lya distribution, drawing word type w must decrease the probability of seeing all other word types w = w. In a later section, we will introduce a topic model that substitutes a generalized P\u00f3lya urn model for the DCM/P\u00f3lya distribution, allowing a draw of word type w to increase the probability of seeing certain other word types. For real-world data, documents W are observed, while the corresponding topic assignments Z are unobserved and may be inferred using either variational methods (Blei et al., 2003; Teh et al., 2006) or MCMC methods (Griffiths and Steyvers, 2004) . Here, we use MCMC methods-specifically Gibbs sampling (Geman and Geman, 1984) , which involves sequentially resampling each topic assignment z (d) n from its conditional posterior given the documents W, the hyperparameters \u03b1 and \u03b2, and Z \\d,n (the current topic assignments for all tokens other than the token at position n in document d).", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 119, |
|
"text": "(Mahmoud, 2008)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1036, |
|
"text": "(Blei et al., 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1037, |
|
"end": 1054, |
|
"text": "Teh et al., 2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1071, |
|
"end": 1101, |
|
"text": "(Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1158, |
|
"end": 1181, |
|
"text": "(Geman and Geman, 1984)", |
|
"ref_id": "BIBREF7" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2" |
|
}, |
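{

"text": "A minimal sketch of one such Gibbs update for a single token is shown below (our illustration; the count-matrix layout is an assumption, not the authors' implementation):\n\nimport numpy as np\n\ndef resample_token(w, d, old_t, N_wt, N_t, N_dt, alpha, beta, rng):\n    # N_wt: T x V topic-word counts; N_t: length-T topic totals;\n    # N_dt: D x T document-topic counts.\n    # Remove the token's current assignment from the counts.\n    N_wt[old_t, w] -= 1\n    N_t[old_t] -= 1\n    N_dt[d, old_t] -= 1\n    # Conditional posterior over topics for this token (treated as if last).\n    V = N_wt.shape[1]\n    p = (N_dt[d] + alpha) * (N_wt[:, w] + beta) / (N_t + V * beta)\n    new_t = rng.choice(len(N_t), p=p / p.sum())\n    # Add the token back under its new assignment.\n    N_wt[new_t, w] += 1\n    N_t[new_t] += 1\n    N_dt[d, new_t] += 1\n    return new_t",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent Dirichlet Allocation",

"sec_num": "2"

},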
|
{ |
|
"text": "Concentrating on 300,000 grant and related journal paper abstracts from the National Institutes of Health (NIH), we worked with two experts from the National Institute of Neurological Disorders and Stroke (NINDS) to collaboratively design an expertdriven topic annotation study. The goal of this study was to develop an annotated set of baseline topics, along with their salient characteristics, as a first step towards automatically identifying and inferring the kinds of topics desired by domain experts. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert Opinions of Topic Quality", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "In order to ensure that the topics selected for annotation were within the NINDS experts' area of expertise, they selected 148 topics (out of 500), all associated with areas funded by NINDS. Each topic t was presented to the experts as a list of the thirty most probable words for that topic, in descending order of their topic-specific \"collapsed\" probabilities, (N_{w|t} + \\beta) / (N_t + |V| \\beta). In addition to the most probable words, the experts were also given metadata for each topic: the most common sequences of two or more consecutive words assigned to that topic, the four topics that most often co-occurred with that topic, the most common IDF-weighted words from the titles of grants, thesaurus terms, NIH institutes, journal titles, and finally a list of the highest-probability grants and PubMed papers for that topic.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Expert-Driven Annotation Protocol",

"sec_num": "3.1"

},
|
{ |
|
"text": "The experts first categorized each topic as one of three types: \"research\", \"grant mechanisms and publication types\" or \"general\". 2 The quality of each topic (\"good\", \"intermediate\", or \"bad\") was then evaluated using criteria specific to the type of topic. In general, topics were only annotated as \"good\" if they contained words that could be grouped together as a single coherent concept. Additionally, each \"research\" topic was only considered to be \"good\" if, in addition to representing a single coherent concept, the aggregate content of the set of documents with appreciable allocations to that topic clearly contained text referring to the concept inferred from the topic words. Finally, for each topic marked as being either \"intermediate\" or \"bad\", one or more of the following problems (defined by the domain experts) was identified, as appropriate:", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 132, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert-Driven Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Chained: every word is connected to every other word through some pairwise word chain, but not all word pairs make sense. For example, a topic whose top three words are \"acids\", \"fatty\" and \"nucleic\" consists of two distinct concepts (i.e., acids produced when fats are broken down versus the building blocks of DNA and RNA) chained via the word \"acids\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert-Driven Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Intruded: either two or more unrelated sets of related words, joined arbitrarily, or an otherwise good topic with a few \"intruder\" words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert-Driven Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Random: no clear, sensical connections between more than a few pairs of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert-Driven Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "\u2022 Unbalanced: the top words are all logically connected to each other, but the topic combines very general and specific terms (e.g., \"signal transduction\" versus \"notch signaling\"). [Footnote 2, referenced above: equivalent to the \"vacuous\" topics of AlSumait et al. (2009).]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Expert-Driven Annotation Protocol",

"sec_num": "3.1"

},
|
{ |
|
"text": "Examples of a good general topic, a good research topic, and a chained research topic are in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 100, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Expert-Driven Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The experts annotated the topics independently and then aggregated their results. Interestingly, no topics were ever considered \"good\" by one expert and \"bad\" by the other-when there was disagreement between the experts, one expert always believed the topic to be \"intermediate.\" In such cases, the experts discussed the reasons for their decisions and came to a consensus. Of the 148 topics selected for annotation, 90 were labeled as \"good,\" 21 as \"intermediate,\" and 37 as \"bad.\" Of the topics labeled as \"bad\" or \"intermediate,\" 23 were \"chained,\" 21 were \"intruded,\" 3 were \"random,\" and 15 were \"unbalanced\". (The annotators were permitted to assign more than one problem to any given topic.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The ultimate goal of this paper is to develop methods for building models with large numbers of specific, high-quality topics from domain-specific corpora. We therefore explore the extent to which information already contained in the documents being modeled can be used to assess topic quality. In this section we evaluate several methods for ranking the quality of topics and compare these rankings to human annotations. No method is likely to perfectly predict human judgments, as individual annotators may disagree on particular topics. For an application involving removing low quality topics we recommend using a weighted combination of metrics, with a threshold determined by users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Metrics for Predicting Expert Annotations", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As a simple baseline, we considered the extent to which topic \"size\" (as measured by the number of tokens assigned to each topic via Gibbs sampling) is a good metric for assessing topic quality. Figure 1 (top) displays the topic size (number of tokens assigned to that topic) and expert annotations (\"good\", \"intermediate\", \"bad\") for the 148 topics manually labeled by annotators as described above. This figure suggests that topic size is a reasonable predic- tor of topic quality. Although there is some overlap, \"bad\" topics are generally smaller than \"good\" topics. Unfortunately, this observation conflicts with the goal of building highly specialized, domainspecific topic models with many high-quality, finegrained topics-in such models the majority of topics will have relatively few tokens assigned to them.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Size", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "[Figure 1, top: topic size (number of tokens, roughly 40,000-160,000) plotted against expert annotations (good, intermediate, bad).]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Size",

"sec_num": "4.1"

},
|
{ |
|
"text": "When displaying topics to users, each topic t is generally represented as a list of the M = 5, . . . , 20 most probable words for that topic, in descending order of their topic-specific \"collapsed\" probabilities. Although there has been previous work on automated generation of labels or headings for topics (Mei et al., 2007) , we choose to work only with the ordered list representation. Labels may obscure or detract from fundamental problems with topic coherence, and better labels don't make bad topics good.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 326, |
|
"text": "(Mei et al., 2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
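{

"text": "For instance, the top-M list can be read directly off a fitted topic-word distribution (a small sketch of ours, with hypothetical variable names):\n\nimport numpy as np\n\ndef top_words(phi_t, vocab, M=10):\n    # Return the M most probable word types of one topic,\n    # in descending order of probability.\n    return [vocab[i] for i in np.argsort(-phi_t)[:M]]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},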
|
{ |
|
"text": "The expert-driven annotation study described in section 3 suggests that three of the four types of poor-quality topics (\"chained,\" \"intruded\" and \"random\") could be detected using a metric based on the co-occurrence of words within the documents being modeled. For \"chained\" and \"intruded\" topics, it is likely that although pairs of words belonging to a single concept will co-occur within a single document (e.g., \"nucleic\" and \"acids\" in documents about DNA), word pairs belonging to different concepts (e.g., \"fatty\" and \"nucleic\") will not. For random topics, it is likely that few words will co-occur.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This insight can be used to design a new metric for assessing topic quality. Letting D(v) be the document frequency of word type v (i.e., the number of documents with least one token of type v) and D(v, v ) be co-document frequency of word types v and v (i.e., the number of documents containing one or more tokens of type v and at least one token of type v ), we define topic coherence as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
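{

"text": "For illustration, both frequencies can be tabulated in a single pass over the corpus (our sketch; documents are assumed to be iterables of word types):\n\nfrom collections import Counter\nfrom itertools import combinations\n\ndef codoc_frequencies(docs):\n    # df[v] is D(v): the number of documents containing word type v at least once.\n    # codf[{v, v'}] is D(v, v'): the number of documents containing both v and v'.\n    df, codf = Counter(), Counter()\n    for doc in docs:\n        types = set(doc)\n        df.update(types)\n        codf.update(frozenset(p) for p in combinations(sorted(types), 2))\n    return df, codf",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},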
|
{

"text": "C(t; V^{(t)}) = \\sum_{m=2}^{M} \\sum_{l=1}^{m-1} \\log \\frac{D(v^{(t)}_m, v^{(t)}_l) + 1}{D(v^{(t)}_l)}, (1) where V^{(t)} = (v^{(t)}_1, . . . , v^{(t)}_M) is a list of the M most probable words in topic t. A smoothing count of 1 is included to avoid taking the logarithm of zero. Figure 1 shows the association between the expert annotations and both topic size (top) and our coherence metric (bottom). We evaluate these results using standard ranking metrics, average precision and the area under the ROC curve. Treating \"good\" topics as positive and \"intermediate\" or \"bad\" topics as negative, we get average precision values of 0.89 for topic size vs. 0.94 for coherence, and AUC values of 0.79 for topic size vs. 0.87 for coherence. We also performed a logistic regression analysis on the binary variable \"is this topic bad\": topic size alone as a predictor gives an AIC (a measure of model fit; lower is better) of 152.5, coherence alone gives 113.8 (substantially better), and both predictors combined give 115.8, so the simpler coherence-only model provides the best fit. We tried weighting the terms in equation 1 by their corresponding topic-word probabilities and by their position in the sorted list of the M most probable words for that topic, but we found that a uniform weighting better predicted topic quality.",

"cite_spans": [],

"ref_spans": [

{

"start": 283,

"end": 291,

"text": "Figure 1",

"ref_id": "FIGREF0"

}

],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},
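{

"text": "A direct transcription of equation 1, building on the document and co-document counts above (our sketch, using the uniform weighting that the text finds best):\n\nimport math\n\ndef coherence(top_words, df, codf):\n    # top_words: the M most probable words of a topic, most probable first.\n    # Computes C(t; V) = sum_{m=2..M} sum_{l<m} log((D(v_m, v_l) + 1) / D(v_l)).\n    score = 0.0\n    for m in range(1, len(top_words)):\n        for l in range(m):\n            pair = frozenset((top_words[m], top_words[l]))\n            score += math.log((codf.get(pair, 0) + 1) / df[top_words[l]])\n    return score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},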
|
{ |
|
"text": "Our topic coherence metric also exhibits good qualitative behavior: of the 20 best-scoring topics, 18 are labeled as \"good,\" one is \"intermediate\" (\"unbalanced\"), and one is \"bad\" (combining \"cortex\" and \"fmri\", words that commonly co-occur, but are conceptually distinct). Of the 20 worst scoring topics, 15 are \"bad,\" 4 are \"intermediate,\" and only one (with the 19th worst coherence score) is \"good.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our coherence metric relies only upon word cooccurrence statistics gathered from the corpus being modeled, and does not depend on an external reference corpus. Ideally, all such co-occurrence information would already be accounted for in the model. We believe that one of the main contributions of our work is demonstrating that standard topic models do not fully utilize available co-occurrence information, and that a held-out reference corpus is therefore not required for purposes of topic evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Equation 1 is very similar to pointwise mutual information (PMI), but is more closely associated with our expert annotations than PMI (which achieves AUC 0.64 and AIC 170.51). PMI has a long history in language technology (Church and Hanks, 1990) , and was recently used by Newman et al. (2010) to evaluate topic models. When expressed in terms of count variables as in equation 1, PMI includes an additional term for D(v^{(t)}_m). The improved performance of our metric over PMI implies that what matters is not the difference between the joint probability of words m and l and the product of their marginals, but the conditional probability of each word given each of the higher-ranked words in the topic.",

"cite_spans": [

{

"start": 222,

"end": 246,

"text": "(Church and Hanks, 1990)",

"ref_id": "BIBREF5"

},

{

"start": 274,

"end": 294,

"text": "Newman et al. (2010)",

"ref_id": "BIBREF12"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},
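{

"text": "For contrast, a PMI analogue retains the extra marginal term D(v_m) (our sketch; the +1 smoothing is applied to the co-document count, as in equation 1):\n\nimport math\n\ndef pmi_score(top_words, df, codf, n_docs):\n    # PMI analogue of equation 1: log of the joint probability over the\n    # product of the marginals, estimated from document frequencies.\n    score = 0.0\n    for m in range(1, len(top_words)):\n        for l in range(m):\n            pair = frozenset((top_words[m], top_words[l]))\n            joint = (codf.get(pair, 0) + 1) / n_docs\n            marg = (df[top_words[m]] / n_docs) * (df[top_words[l]] / n_docs)\n            score += math.log(joint / marg)\n    return score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Coherence",

"sec_num": "4.2"

},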
|
{ |
|
"text": "In order to provide intuition for the behavior of our topic coherence metric, table 1 shows three example topics and their topic coherence scores. The first topic, related to grant-funded training programs, is one of the best-scoring topics. All pairs of words have high co-document frequencies. The second topic, on neurons, is more typical of quality \"research\" topics. Overall, these words occur less frequently, but generally occur moderately interchangeably: there is little structure to their covariance. The last topic is one of the lowest-scoring topics. Its co-document frequency matrix is shown in table 2. The top two words are closely related: 487 documents include \"aging\" at least once, 122 include \"lifespan\", and 55 include both. Meanwhile, the third word \"globin\" occurs with only one of the top seven words-the common word \"human\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Coherence", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As an additional check for both our expert annotations and our automated metric, we replicated the \"word intrusion\" evaluation originally introduced by Chang et al. (2009) . In this task, one of the top ten most probable words in a topic is replaced with a another word, selected at random from the corpus. The resulting set of words is presented, in a random order, to users, who are asked to identify the \"intruder\" word. It is very unlikely that a randomlychosen word will be semantically related to any of the original words in the topic, so if a topic is a high quality representation of a semantically coherent concept, it should be easy for users to select the intruder word. If the topic is not coherent, there may be words in the topic that are also not semantically related to any other word, thus causing users to select \"correct\" words instead of the real intruder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 171, |
|
"text": "Chang et al. (2009)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison to word intrusion", |
|
"sec_num": "4.3" |
|
}, |
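{

"text": "A sketch of how one intrusion question might be constructed (ours; the intruder-sampling scheme is simplified relative to Chang et al. (2009)):\n\nimport random\n\ndef intrusion_question(topic_top10, vocabulary, rng=random):\n    # Replace one of the topic's ten most probable words with a random\n    # word from the corpus vocabulary, then shuffle for presentation.\n    intruder = rng.choice([v for v in vocabulary if v not in topic_top10])\n    words = list(topic_top10)\n    words[rng.randrange(len(words))] = intruder\n    rng.shuffle(words)\n    return words, intruder",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison to word intrusion",

"sec_num": "4.3"

},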
|
|
{

"text": "We recruited ten additional expert annotators from NINDS, not including our original annotators, and presented them with the intruder selection task, using the set of previously evaluated topics. Results are shown in figure 2. In the first two plots, the x-axis is one of our two automated quality metrics (topic size and coherence) and the y-axis is the number of annotators that correctly identified the true intruder word (accuracy). The histograms below these plots show the number of topics with each level of annotator accuracy for good and bad topics. For good topics (green circles), the annotators were generally able to detect the intruder word with high accuracy. Bad topics (red diamonds) had more uniform accuracies. These results suggest that topics with low intruder detection accuracy tend to be bad, but some bad topics can have a high accuracy. [Table 1: Example topics (good/general, good/research, chained/research) with different coherence scores (numbers closer to zero indicate higher coherence); the chained topic combines words related to aging with words describing blood and blood-related diseases, connected only through the common word \"human\". -167.1: students, program, summer, biomedical, training, experience, undergraduate, career, minority, student, careers, underrepresented, medical students, week, science. -252.1: neurons, neuronal, brain, axon, neuron, guidance, nervous system, cns, axons, neural, axonal, cortical, survival, disorders, motor. The chained topic scored -357.2.]",

"cite_spans": [],

"ref_spans": [

{

"start": 862,

"end": 869,

"text": "Table 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Comparison to word intrusion",

"sec_num": "4.3"

},
|
{ |
|
"text": "For example, spotting an intruder word in a chained topic can be easy. The low-quality topic receptors, cannabinoid, cannabinoids, ligands, cannabis, endocannabinoid, cxcr4, [virus] , receptor, sdf1, is a typical \"chained\" topic, with CXCR4 linked to cannabinoids only through receptors, and otherwise unrelated. Eight out of ten annotators correctly identified \"virus\" as the correct intruder. Repeating the logistic regression experiment using intruder detection accuracy as input, the AIC value is 163.18much worse than either topic size or coherence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 181, |
|
"text": "receptors, cannabinoid, cannabinoids, ligands, cannabis, endocannabinoid, cxcr4, [virus]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison to word intrusion", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Although the topic coherence metric defined above provides an accurate way of assessing topic quality, preventing poor quality topics from occurring in the first place is preferable. Our results in the previous section show that we can identify low-quality topics without making use of external supervision; the training data by itself contains sufficient information at least to reject poor combinations of words. In this section, we describe a new topic model that incorporates the corpus-specific word co-occurrence information used in our coherence metric directly into the statistical topic modeling framework. It is important to note that simply disallowing words that never co-occur from being assigned to the same topic is not sufficient. Due to the power-law characteristics of language, most words are rare and will not co-occur with most other words regardless of their semantic similarity. It is rather the degree to which the most prominent words in a topic do not co-occur with the other most prominent words in that topic that is an indicator of topic incoherence. We therefore desire models that guide topics towards semantic similarity without imposing hard constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As an example of such a model, we present a new topic model in which the occurrence of word type w in topic t increases not only the probability of seeing that word type again, but also increases the probability of seeing other related words (as determined by co-document frequencies for the corpus being modeled). This new topic model retains the documenttopic component of standard LDA, but replaces the usual P\u00f3lya urn topic-word component with a generalized P\u00f3lya urn framework (Mahmoud, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 482, |
|
"end": 497, |
|
"text": "(Mahmoud, 2008)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A sequence of i.i.d. samples from a discrete distribution can be imagined as arising by repeatedly drawing a random ball from an urn, where the number of balls of each color is proportional to the probability of that color, replacing the selected ball after each draw. In a P\u00f3lya urn, each ball is replaced along with another ball of the same color. Samples from this model exhibit the \"burstiness\" property: the probability of drawing a ball of color w increases each time a ball of that color is drawn. This process represents the marginal distribution of a hierarchical model with a Dirichlet prior and a multinomial likelihood, and is used as the distribution over words for each topic in almost all previous topic models. In a generalized P\u00f3lya urn model, having drawn a ball of color w, A vw additional balls of each color v \u2208 {1, . . . , W } are returned to the urn. Given W and Z, the conditional posterior probability of word w in topic t implied by this generalized model is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (w | t, W, Z, \u03b2, A) = v N v|t A vw + \u03b2 N t + |V|\u03b2 ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where A is a W \u00d7 W real-valued matrix, known as the addition matrix or schema. The simple P\u00f3lya urn model (and hence the conditional posterior probability of word w in topic t under LDA) can be recovered by setting the schema A to the identity matrix. Unlike the simple P\u00f3lya distribution, we do not know of a representation of the generalized P\u00f3lya urn distribution that can be expressed using a concise set of conditional independence assumptions. A standard graphical model with plate notation would therefore not be helpful in highlighting the differences between the two models, and is not shown. Algorithm 1 shows pseudocode for a single Gibbs sweep over the latent variables Z in standard LDA. Algorithm 2 shows the modifications necessary to 268 new topic. As long as A is sparse, this operation adds only a constant factor to the computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
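{

"text": "Equation 2 can be evaluated directly from the counts and the schema (our sketch; a dense A is used for clarity, although the paper keeps A sparse):\n\nimport numpy as np\n\ndef gpu_word_prob(N_wt, A, beta=0.01):\n    # Equation 2: P(w | t) = (sum_v N_{v|t} A_{vw} + beta) / (N_t + |V| beta).\n    # N_wt: T x V topic-word counts; A: V x V schema (A = I recovers LDA).\n    V = N_wt.shape[1]\n    N_t = N_wt.sum(axis=1, keepdims=True)\n    return (N_wt @ A + beta) / (N_t + V * beta)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generalized P\u00f3lya Urn Models",

"sec_num": "5"

},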
|
{ |
|
"text": "Another property of the generalized P\u00f3lya urn model is that it is nonexchangeable-the joint probability of the tokens in any given topic is not invariant to permutation of those tokens. Inference of Z given W via Gibbs sampling involves repeatedly cycling through the tokens in W and, for each one, resampling its topic assignment conditioned on W and the current topic assignments for all tokens other than the token of interest. For LDA, the sampling distribution for each topic assignment is simply the product of two predictive probabilities, obtained by treating the token of interest as if it were the last. For a topic model with a generalized P\u00f3lya urn for the topic-word component, the sampling distribution is more complicated. Specifically, the topicword component of the sampling distribution is no longer a simple predictive distribution-when sampling a new value for z (d) n , the implication of each possible value for subsequent tokens and their topic assignments must be considered. Unfortunately, this can be very computationally expensive, particularly for large corpora. There are several ways around this problem. The first is to use sequential Monte Carlo methods, which have been successfully applied to topic models previously (Canini et al., 2009) . The second approach is to approximate the true Gibbs sampling distribution by treating each token as if it were the last, ignoring implications for subsequent tokens and their topic assignments. We find that this approximate method performs well empirically.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 1251, |
|
"end": 1272, |
|
"text": "(Canini et al., 2009)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized P\u00f3lya Urn Models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Inspired by our evaluation metric, we define A as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A vv \u221d \u03bb v D(v)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "A vw \u221d \u03bb v D(w, v)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where each element is scaled by a row-specific weight \u03bb v and each column is normalized to sum to 1. Normalizing columns makes comparison to standard LDA simpler, because the relative effect of smoothing parameter \u03b2 = 0.01 is equivalent. We set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "\u03bb v = log (D / D(v))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ", the standard IDF weight used in information retrieval, which is larger for less frequent words. The column for word type w can be interpreted as word types with significant association with w. The IDF weighting therefore has the effect of increasing the strength of association for rare word types. We also found empirically that it is helpful to remove off-diagonal elements for the most common types, such as those that occur in more than 5% of documents (IDF < 3.0). Including nonzero off-diagonal values in A for very frequent types causes the model to disperse those types over many topics, which leads to large numbers of extremely similar topics. To measure this effect, we calculated the Jensen-Shannon divergence between all pairs of topic-word distributions in a given model. types, the mean of the 100 lowest divergences was 0.29 \u00b1 .05 (a divergence of 1.0 represents distributions with no shared support) at T = 200. The average divergence of the 100 most similar pairs of topics for standard LDA (i.e., A = I) is 0.67\u00b1.05. The same statistic for the generalized P\u00f3lya urn model without off-diagonal elements for word types with high document frequency is 0.822 \u00b1 0.09. Setting the off-diagonal elements of the schema A to zero for the most common word types also has the fortunate effect of substantially reducing preprocessing time. We find that Gibbs sampling for the generalized P\u00f3lya model takes roughly two to three times longer than for standard LDA, depending on the sparsity of the schema, due to additional bookkeeping needed before and after sampling topics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting the Schema A", |
|
"sec_num": "5.1" |
|
}, |
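{

"text": "Putting these choices together, a sketch of schema construction (our code, reusing the co-document counts from section 4.2; zeroing rows rather than columns for frequent types is our reading of the text):\n\nimport math\nimport numpy as np\n\ndef build_schema(df, codf, vocab, n_docs, idf_cutoff=3.0):\n    # Equation 3 with IDF row weights: A_vv proportional to lambda_v D(v),\n    # A_vw proportional to lambda_v D(w, v); off-diagonal entries in rows of\n    # frequent types (IDF < idf_cutoff) are zeroed; columns sum to 1.\n    V = len(vocab)\n    idx = {v: i for i, v in enumerate(vocab)}\n    lam = np.array([math.log(n_docs / df[v]) for v in vocab])\n    A = np.zeros((V, V))\n    for pair, count in codf.items():\n        v, w = tuple(pair)\n        if lam[idx[v]] >= idf_cutoff:\n            A[idx[v], idx[w]] = lam[idx[v]] * count\n        if lam[idx[w]] >= idf_cutoff:\n            A[idx[w], idx[v]] = lam[idx[w]] * count\n    A[np.arange(V), np.arange(V)] = lam * np.array([df[v] for v in vocab])\n    cols = A.sum(axis=0, keepdims=True)\n    return A / np.where(cols > 0.0, cols, 1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setting the Schema A",

"sec_num": "5.1"

},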
|
{ |
|
"text": "We evaluated the new model on a corpus of NIH grant abstracts. Details are given in table 3. Figure 3 shows the performance of the generalized P\u00f3lya urn model relative to LDA. Two metrics-our new topic coherence metric and the log probability of held-out documents-are shown over 1000 iterations at 50 iteration intervals. Each model was run over five folds of cross validation, each with three random initializations. For each model we calculated an overall coherence score by calculating the topic coherence for each topic individually and then averaging these values. We report the average over all 15 models in each plot. Held-out probabilities were calculated using the left-to-right method of Wallach et al. (2009) , with each cross-validation fold using its own schema A. The generalized P\u00f3lya model performs very well in average topic coherence, reaching levels within the first 50 iterations that match the final score. This model has an early advantage for held-out probability as well, but is eventually overtaken by LDA. This trend is consistent with Chang et al.'s observation that held-out probabilities are not always good predictors of human judgments (Chang et al., 2009) . Results are consistent over T \u2208 {100, 200, 300}.", |
|
"cite_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 720, |
|
"text": "Wallach et al. (2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1168, |
|
"end": 1188, |
|
"text": "(Chang et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 101, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In section 4.2, we demonstrated that our topic coherence metric correlates with expert opinions of topic quality for standard LDA. The generalized P\u00f3lya urn model was therefore designed with the goal of directly optimizing that metric. It is possible, however, that optimizing for coherence directly could break the association between coherence metric and topic quality. We therefore repeated the expert-driven evaluation protocol described in section 3.1. We trained one standard LDA model and one generalized P\u00f3lya urn model, each with T = 200, and randomly shuffled the 400 resulting topics. The topics were then presented to the experts from NINDS, with no indication as to the identity of the model from which each topic came. As these evaluations are time consuming, the experts evaluated the only the first 200 topics, which consisted of 103 generalized P\u00f3lya urn topics and 97 LDA topics. AUC values predicting bad topics given coherence were 0.83 and 0.80, respectively. Coherence effectively predicts topic quality in both models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Although we were able to improve the average overall quality of topics and the average quality of the ten lowest-scoring topics, we found that the generalized P\u00f3lya urn model was less successful reducing the overall number of bad topics. Ignoring one \"unbalanced\" topic from each model, 16.5% of the LDA topics and 13.5% from the generalized P\u00f3lya urn model were marked as \"bad.\" While this result is an improvement, it is not significant at p = 0.05.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We have demonstrated the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u2022 There is a class of low-quality topics that cannot be detected using existing word-intrusion tests, but that can be identified reliably using a metric based on word co-occurrence statistics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u2022 It is possible to improve the coherence score of topics, both overall and for the ten worst, while retaining the ability to flag bad topics, all without requiring semi-supervised data or additional reference corpora. Although additional information may be useful, it is not necessary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u2022 Such models achieve better performance with substantially fewer Gibbs iterations than LDA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We believe that the most important challenges in future topic modeling research are improving the semantic quality of topics, particularly at the low end, and scaling to ever-larger data sets while ensuring high-quality topics. Our results provide critical insight into these problems. We found that it should be possible to construct unsupervised topic models that do not produce bad topics. We also found that Gibbs sampling mixes faster for models that use word cooccurrence information, suggesting that such methods may also be useful in guiding online stochastic variational inference (Hoffman et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 590, |
|
"end": 612, |
|
"text": "(Hoffman et al., 2010)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "All evaluated models will be released publicly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by the Center for Intelligent Information Retrieval, in part by the CIA, the NSA and the NSF under NSF grant # IIS-0326249, in part by NIH:HHSN271200900640P, and in part by NSF # number SBE-0965436. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Topic significance ranking of LDA generative models", |
|
"authors": [ |
|
{ |
|
"first": "Loulwah", |
|
"middle": [], |
|
"last": "Alsumait", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Barbara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Gentle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlotta", |
|
"middle": [], |
|
"last": "Domeniconi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ECML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Loulwah AlSumait, Daniel Barbara, James Gentle, and Carlotta Domeniconi. 2009. Topic significance rank- ing of LDA generative models. In ECML.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Incorporating domain knowledge into topic modeling via Dirichlet forest priors", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Andrzejewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Craven", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 25-32.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, January.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Online inference of topics with latent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Canini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K.R. Canini, L. Shi, and T.L. Griffiths. 2009. Online inference of topics with latent Dirichlet allocation. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Reading tea leaves: How humans interpret topic models", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Gerrish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "288--296", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Ad- vances in Neural Information Processing Systems 22, pages 288-296.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Word association norms, mutual information, and lexicography", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "22--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Church and Patrick Hanks. 1990. Word asso- ciation norms, mutual information, and lexicography. Computational Linguistics, 6(1):22-29.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Accounting for burstiness in topic models", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Doyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Elkan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Doyle and Charles Elkan. 2009. Accounting for burstiness in topic models. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Geman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Geman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "IEEE Transaction on Pattern Analysis and Machine Intelligence", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "721--741", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transaction on Pattern Analysis and Machine Intelligence 6, pages 721-741.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Finding scientific topics", |
|
"authors": [ |
|
{

"first": "Thomas",

"middle": [

"L"

],

"last": "Griffiths",

"suffix": ""

},

{

"first": "Mark",

"middle": [],

"last": "Steyvers",

"suffix": ""

}
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "101", |
|
"issue": "", |
|
"pages": "5228--5235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences, 101(suppl. 1):5228-5235.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Online learning for latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Hoffman, David Blei, and Francis Bach. 2010. Online learning for latent dirichlet allocation. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "P\u00f3lya Urn Models. Chapman & Hall/CRC Texts in Statistical Science", |
|
"authors": [ |
|
{ |
|
"first": "Hosan", |
|
"middle": [], |
|
"last": "Mahmoud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hosan Mahmoud. 2008. P\u00f3lya Urn Models. Chapman & Hall/CRC Texts in Statistical Science.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automatic labeling of multinomial topic models", |
|
"authors": [ |
|
{ |
|
"first": "Qiaozhu", |
|
"middle": [], |
|
"last": "Mei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuehua", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "490--499", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic mod- els. In Proceedings of the 13th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 490-499.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Automatic evaluation of topic coherence", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jey", |
|
"middle": [ |
|
"Han" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Grieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coher- ence. In Human Language Technologies: The Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A collapsed variational Bayesian inference algorithm for lat ent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "Yee Whye", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dave", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Neural Information Processing Systems 18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Whye Teh, Dave Newman, and Max Welling. 2006. A collapsed variational Bayesian inference algorithm for lat ent Dirichlet allocation. In Advances in Neural Information Processing Systems 18.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Evaluation methods for topic models", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iain", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 26th Interational Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Interational Con- ference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LDA-based document models for ad-hoc retrival", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 29th Annual International SIGIR Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Wei and Bruce Croft. 2006. LDA-based document models for ad-hoc retrival. In Proceedings of the 29th Annual International SIGIR Conference.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Topic size is a good indicator of quality; the new coherence metric is better. Top shows expert-rated topics ranked by topic size (AP 0.89, AUC 0.79), bottom shows same topics ranked by coherence (AP 0.94, AUC 0.87). Random jitter is added to the y-axis for clarity.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Top: results of the intruder selection task relative to two topic quality metrics. Bottom: marginal intruder accuracy frequencies of good and bad topics.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "P\u00f3lya urn topics (blue) have higher average coherence and converge much faster than LDA topics (red). The top plots show topic coherence (averaged over 15 runs) over 1000 iterations of Gibbs sampling. Error bars are not visible in this plot. The middle plot shows the average coherence of the 10 lowest scoring topics. The bottom plots show held-out log probability (in thousands) for the same models (three runs each of 5-fold cross-validation).", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF3": { |
|
"text": "Co-document frequency matrix for the top words in a low-quality topic (according to our coherence metric), shaded to highlight zeros. The diagonal (light gray) shows the overall document frequency for each word w. The column on the right is N w|t . Note that \"globin\" and \"erythroid\" do not co-occur with any of the aging-related words.", |
|
"content": "<table><tr><td>aging 487 lifespan 53 globin 0 age related 65 longevity 42 erythroid 0 age 51 sickle cell 0 human 138 hb 0</td><td>53 122 0 15 28 0 15 0 44 0</td><td>0 0 39 0 0 19 0 15 27 3</td><td>65 15 0 119 12 42 28 0 12 73 0 0 25 6 0 0 37 20 23 0 0 19 0 0 69 0 8 0 1 1</td><td>51 15 0 25 6 0 245 1 82 0</td><td>0 0 15 0 0 8 1 43 16 4347 157 91 138 0 914 44 0 205 27 3 200 37 0 160 20 1 159 23 1 110 82 0 103 16 2 93 2 5 15 73</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "Data set statistics.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |