|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:43:51.908617Z" |
|
}, |
|
"title": "ToModAPI: A Topic Modeling API to Train, Use and Compare Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "Pasquale", |
|
"middle": [], |
|
"last": "Lisena", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"settlement": "Sophia Antipolis", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ismail", |
|
"middle": [], |
|
"last": "Harrando", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"settlement": "Sophia Antipolis", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Oussama", |
|
"middle": [], |
|
"last": "Kandakji", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"settlement": "Sophia Antipolis", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rapha\u00ebl", |
|
"middle": [], |
|
"last": "Troncy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"settlement": "Sophia Antipolis", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "From LDA to neural models, different topic modeling approaches have been proposed in the literature. However, their suitability and performance is not easy to compare, particularly when the algorithms are being used in the wild on heterogeneous datasets. In this paper, we introduce ToModAPI (TOpic MOdeling API), a wrapper library to easily train, evaluate and infer using different topic modeling algorithms through a unified interface. The library is extensible and can be used in Python environments or through a Web API.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "From LDA to neural models, different topic modeling approaches have been proposed in the literature. However, their suitability and performance is not easy to compare, particularly when the algorithms are being used in the wild on heterogeneous datasets. In this paper, we introduce ToModAPI (TOpic MOdeling API), a wrapper library to easily train, evaluate and infer using different topic modeling algorithms through a unified interface. The library is extensible and can be used in Python environments or through a Web API.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The analysis of massive volumes of text is an extremely expensive activity when it relies on not-scalable manual approaches or crowdsourcing strategies. Relevant tasks typically include textual document classification, document clustering, keywords and named entities extraction, language or sequence modeling, etc. In the literature, topic modeling and topic extraction, which enable to automatically recognise the main subject (or topic) in a text, have attracted a lot of interest. The predicted topics can be used for clustering documents, for improving named entity extraction (Newman et al., 2006) , and for automatic recommendation of related documents (Luostarinen and Kohonen, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 603, |
|
"text": "(Newman et al., 2006)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 691, |
|
"text": "(Luostarinen and Kohonen, 2013)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several topic modeling algorithms have been proposed. However, we argue that it is hard to compare and to choose the most appropriate one given a particular goal. Furthermore, the algorithms are often evaluated on different datasets and different scoring metrics are used. In this work, we have selected some of the most popular topic modeling algorithms from the state of the art in order to integrate them in a common platform, which homogenises the interface methods and the evaluation metrics. The result is ToModAPI 1 which allows to dynamically train, evaluate, perform inference on different models, and extract information from these models as well, making it possible to compare them using different metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remaining of this paper is organised as follows. In Section 2, we describe some related works and we detail some state-of-the-art topic modeling techniques. In Section 3, we provide an overview of the evaluation metrics usually used. We introduce ToModAPI in Section 4. We then describe some datasets (Section 5) that are used in training to perform a comparison of the topic models (Section 6). Finally, we give some conclusions and outline future work in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Aside from a few exceptions (Blei and McAuliffe, 2007) , most topic modeling works propose or apply unsupervised methods. Instead of learning the mapping to a pre-defined set of topics (or labels), the goal of these methods consists in assigning training documents to N unknown topics, where N is a required parameter. Usually, these models compute two distributions: a Document-Topic distribution which represents the probability of each document to belong to each topic, and a Topic-Word distribution which represents the probability of each topic to be represented by each word present in the documents. These distributions are used to predict (or infer) the topic of unseen documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 54, |
|
"text": "(Blei and McAuliffe, 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Latent Dirichlet Allocation (LDA) is a unsupervised statistical modeling approach (Blei et al., 2003) that considers each document as a bag of words and creates a randomly assigned documenttopic and word-topic distribution. Iterating over words in each document, the distributions are updated according to the probability that a document or a word belongs to a certain topic. The Hierarchical Dirichlet Process (HDP) model (Teh et al., 2006) is another statistical approach for clustering grouped data such as text documents. It considers each document as a group of words belonging with a certain probability to one or multiple components of a mixture model, i.e. the topics. Both the probability measure for each document (distribution over the topics) and the base probability measure -which allows the sharing of clusters across documents -are drawn from Dirichlet Processes (Ferguson, 1973) . Differently from many other topic models, HDP infers the number of topics automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 441, |
|
"text": "(Teh et al., 2006)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 879, |
|
"end": 895, |
|
"text": "(Ferguson, 1973)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
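
{

"text": "As an illustration of the statistical models above (a sketch added for clarity, not part of the original paper), LDA and HDP can be trained with Gensim on a toy corpus of pre-processed token lists as follows:\n\n# Minimal sketch, assuming Gensim is installed; 'docs' is a toy pre-processed corpus.\nfrom gensim.corpora import Dictionary\nfrom gensim.models import LdaModel, HdpModel\n\ndocs = [['topic', 'modeling', 'text'], ['news', 'article', 'text'], ['news', 'topic', 'word']]\ndictionary = Dictionary(docs)\ncorpus = [dictionary.doc2bow(doc) for doc in docs]\n\n# LDA needs the number of topics; HDP infers it from the data.\nlda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)\nhdp = HdpModel(corpus=corpus, id2word=dictionary)\n\nprint(lda.show_topics(num_words=5))\nprint(hdp.show_topics(num_words=5))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related Work",

"sec_num": "2"

},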
|
{ |
|
"text": "Gibbs Sampling for a DMM (GSDMM) applies the Dirichlet Multinomial Mixture model for short text clustering (Yin and Wang, 2014) . This algorithm works computing iteratively the probability that a document join a specific one of the N available clusters. This probability consist in two parts: 1) a part that promotes the clusters with more documents; 2) a part that advantages the movement of a document towards similar clusters, i.e. which contains a similar word-set. Those two parts are controlled by the parameters \u03b1 and \u03b2. The simplicity of GSDMM provides a fast convergence after some iterations. This algorithm consider the given number of clusters given as an upper bound and it might end up with a lower number of topics. From another perspective, it is somehow able to infer the optimal number of topics, given the upper bound.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "(Yin and Wang, 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
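
{

"text": "The two-part probability above can be illustrated with a simplified sketch (this is not the exact sampling equation of Yin and Wang (2014), only the two components described here):\n\nimport math\n\n# Simplified GSDMM-style score for assigning a document to cluster k.\n# m_k: documents in cluster k; word_counts_k: word frequencies in cluster k;\n# n_k: total words in cluster k; D: documents; K: clusters; V: vocabulary size.\ndef cluster_score(doc_words, m_k, word_counts_k, n_k, D, K, V, alpha=0.1, beta=0.1):\n    # Part 1: clusters with more documents are promoted.\n    part1 = math.log((m_k + alpha) / (D - 1 + K * alpha))\n    # Part 2: clusters containing a similar word-set are favoured.\n    part2 = 0.0\n    for i, w in enumerate(doc_words):\n        part2 += math.log((word_counts_k.get(w, 0) + beta) / (n_k + V * beta + i))\n    return part1 + part2",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related Work",

"sec_num": "2"

},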
|
{ |
|
"text": "Word vectors such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) can help to enhance topic-word representations, as achieved by the Latent Feature Topic Models (LFTM) (Nguyen et al., 2015). One of the LFTM algorithms is Latent Feature LDA (LF-LDA), which extends the original LDA algorithm by enriching the topic-word distribution with a latent feature component composed of pre-trained word vectors. In the same vein, the Paragraph Vector Topic Model (PVTM) (Lenz and Winker, 2020) uses doc2vec (Le and Mikolov, 2014) to generate document-level representations in a common embedding space. Then, it fits a Gaussian Mixture Model to cluster all the similar documents into a predetermined number of topics -i.e. the number of GMM components.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 52, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 62, |
|
"end": 87, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 541, |
|
"text": "(Le and Mikolov, 2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Topic modeling can also be performed via linearalgebraic methods. Starting from the the high-dimensional term-document matrix, multiple approaches can be used to lower its dimensions. Then, we consider every dimension in the lower-rank matrix as a latent topic. A straightforward application of this principle is the Latent Semantic Indexing model (LSI) (Deerwester et al., 1990) , which uses Singular Value Decomposition as a means to approximate the term-document matrix (potentially mediated by TF-IDF) into one with less rowseach one representing a latent semantic dimension in the data -and preserving the similarity structure among columns (terms). Non-negative Matrix Factorisation (NMF) (Paatero and Tapper, 1994) exploits the fact that the term-document matrix is non-negative, thus producing not only a denser representation of the term-document distribution through the matrix factorisation but guaranteeing that the membership of a document to each topic is represented by a positive coefficient.", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 379, |
|
"text": "(Deerwester et al., 1990)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 721, |
|
"text": "(Paatero and Tapper, 1994)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained", |
|
"sec_num": null |
|
}, |
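
{

"text": "A minimal sketch of this matrix-factorisation route with scikit-learn (an illustration, not the code of the library): a TF-IDF term-document matrix is reduced with TruncatedSVD for LSI and with NMF for non-negative topics.\n\n# Minimal sketch, assuming scikit-learn >= 1.0 is installed.\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.decomposition import TruncatedSVD, NMF\n\ndocs = ['the economy is growing', 'the team won the match', 'stocks and markets fell']\ntfidf = TfidfVectorizer()\nX = tfidf.fit_transform(docs)                    # documents x terms\n\nlsi = TruncatedSVD(n_components=2).fit(X)        # each component is a latent dimension\nnmf = NMF(n_components=2, init='nndsvd').fit(X)  # non-negative topic-term weights\n\nterms = tfidf.get_feature_names_out()\nfor topic in nmf.components_:                    # print the top terms of each NMF topic\n    top = topic.argsort()[::-1][:5]\n    print([terms[i] for i in top])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pre-trained",

"sec_num": null

},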
|
{ |
|
"text": "In recent years, neural network approaches for topic modeling have gained popularity giving birth to a family of Neural Topic Models (NTM) (Cao et al., 2015) . Among those, doc2topic (D2T) 2 uses a neural network which separately computes N-dimensional embedding vectors for words and documents -with N equal to the number of topics, before computing the final output using a sigmoid activation. The distributions topic-word and document-topic are obtained by getting the final weights on the two embedding layers. Another neural topic model, the Contextualized Topic Model (CTM) (Bianchi et al., 2020) uses Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) -a neural transformer language model designed to compute sentences representations efficiently -to generate a fixed-size embedding for each document to contextualise the usual Bag of Words representation. CTM enhances the Neural-ProdLDA (Srivastava and Sutton, 2017) architecture with this contextual representation to significantly improve the coherence of the generated topics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 157, |
|
"text": "(Cao et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 602, |
|
"text": "(Bianchi et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 896, |
|
"end": 925, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained", |
|
"sec_num": null |
|
}, |
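
{

"text": "As an illustration of the contextual input CTM builds on (a sketch under assumed package and model names, not the CTM implementation itself), a SBERT embedding can be computed for each document and concatenated with its Bag-of-Words vector:\n\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sentence_transformers import SentenceTransformer\n\ndocs = ['the central bank raised interest rates', 'the striker scored twice last night']\nbow = CountVectorizer().fit_transform(docs).toarray()   # usual Bag of Words\nsbert = SentenceTransformer('all-MiniLM-L6-v2')         # assumed SBERT model name\ncontextual = sbert.encode(docs)                          # fixed-size document embeddings\nctm_input = np.hstack([bow, contextual])                 # contextualised representation\nprint(ctm_input.shape)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pre-trained",

"sec_num": null

},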
|
{ |
|
"text": "Previous works have tried to compare different topic models. A review of statistical topic modeling techniques is included in Newman et al. (2006) . A comparison and evaluation of LDA and NMF using the coherence metric is proposed by O'Callaghan et al. (2015). Among the libraries for performing topic modeling, Gensim is undoubtedly the most known one, providing implementations of several tools for the NLP field (\u0158eh\u016f\u0159ek and Sojka, 2010). Focusing on topic modeling for short texts, STMM includes 11 different topic models, which can be trained and evaluated through command line (Qiang et al., 2019) . The Topic Modelling Open Source Tool 3 exposes a web graphical user interface for training and evaluating topic models, LDA being the only representative so far. The Promoss Topic Modelling Toolbox 4 provides a unified Java command line interface for computing a topic model distribution using LDA or the Hierarchical Multi-Dirichlet Process Topic Model (HMDP) (Kling, 2016) . However, it does not allow to apply the computed model on unseen documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 146, |
|
"text": "Newman et al. (2006)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 603, |
|
"text": "(Qiang et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 967, |
|
"end": 980, |
|
"text": "(Kling, 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The evaluation of machine learning techniques often relies on accuracy scores computed comparing predicted results against a ground truth. In the case of unsupervised techniques like topic modeling, the ground truth is not always available. For this reason, in the literature, we can find:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 metrics which enable to evaluate a topic model independently from a ground truth, among which, coherence measures are the most popular ones for topic modeling (R\u00f6der et al., 2015; O'Callaghan et al., 2015; Qiang et al., 2019) ;", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 181, |
|
"text": "(R\u00f6der et al., 2015;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 207, |
|
"text": "O'Callaghan et al., 2015;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 227, |
|
"text": "Qiang et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 metrics that measure the quality of a model's predictions by comparing its resulting clusters against ground truth labels, in this case a topic label for each document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The coherence metrics rely on the joint probability P (w i , w j ) of two words w i and w j that is computed by counting the number of documents in which those words occur together divided by the total number of documents in the corpus. The documents are fragmented using sliding windows of a given length, and the probability is given by the number of fragments including both w i and w j divided by the total number of fragments. This probability can be expressed through the Pointwise Mutual Information (PMI), defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P M I(w i , w j ) = log P (w i , w j ) + P (w i ) \u2022 P (w j )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "3 https://github.com/opeyemibami/ Topic-Modelling-Open-Source-Tool 4 https://github.com/gesiscss/promoss A small value is chosen for , in order to avoid computing the logarithm of 0. Different metrics based on PMI have been introduced in the literature, differing in the strategies applied for token segmentation, probability estimation, confirmation measure, and aggregation. The UCI coherence (R\u00f6der et al., 2015) averages the PMI computed between pairs of topics, according to:", |
|
"cite_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 415, |
|
"text": "(R\u00f6der et al., 2015)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C U CI = 2 N \u2022(N \u22121) N \u22121 i=1 N j=i+1 P M I(w i , w j ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The UMASS coherence (R\u00f6der et al., 2015) relies instead on a differently computed joint probability:", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 40, |
|
"text": "(R\u00f6der et al., 2015)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C U M ASS = 2 N \u2022(N \u22121) N \u22121 i=1 N j=i+1 log P (w i ,w j )+ P (w j )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The Normalized Pointwise Mutual Information (NPMI) (Chiarcos et al., 2009) applies the PMI in a confirmation measure for defining the association between two words:", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 74, |
|
"text": "(Chiarcos et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "N P M I(w i , w j ) = P M I(w i , w j ) \u2212log(P (w i , w j ) + )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "NPMI values go from -1 (never co-occurring words) to +1 (always co-occurring), while the value of 0 suggests complete independence. This measure can be applied also to word sets. This is made possible using a vector representation in which each feature consists in the NPMI computed between w i and a word in the corpus W , according to the formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2212 \u2192 v (w i ) = N P M I(w i , w j )|w j \u2208 W", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In ToModAPI, we include the following four metrics 5 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 C N P M I applies NPMI as in Eqn (4) to couples of words, computing their joint probabilities using sliding windows;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 C V compute the cosine similarity of the vectors -as defined in Eqn (5) -related to each word of the topic. The NPMI is computed on sliding windows;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 C U CI as in Eqn (2);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 C U M ASS as in Eqn (3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
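
{

"text": "The window-based probabilities and the aggregations behind C_NPMI, C_UCI and C_UMASS can be sketched as follows (added for illustration; ToModAPI itself relies on the Gensim implementations):\n\nimport math\nfrom collections import Counter\nfrom itertools import combinations\n\n# Estimate single and joint word probabilities from sliding windows (Section 3.1).\ndef window_counts(docs, window=10):\n    single, pair, total = Counter(), Counter(), 0\n    for doc in docs:                                   # doc is a list of tokens\n        for s in range(max(1, len(doc) - window + 1)):\n            frag = set(doc[s:s + window])\n            total += 1\n            for w in frag:\n                single[w] += 1\n            for a, b in combinations(sorted(frag), 2):\n                pair[(a, b)] += 1\n    return single, pair, total\n\ndef pmi(wi, wj, single, pair, total, eps=1e-12):       # Eqn (1); assumes wi and wj occur in the corpus\n    p_i, p_j = single[wi] / total, single[wj] / total\n    p_ij = pair[tuple(sorted((wi, wj)))] / total\n    return math.log((p_ij + eps) / (p_i * p_j))\n\ndef c_uci(top_words, single, pair, total):             # Eqn (2): mean PMI over word pairs\n    pairs = list(combinations(top_words, 2))\n    return sum(pmi(a, b, single, pair, total) for a, b in pairs) / len(pairs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Coherence metrics",

"sec_num": "3.1"

},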
|
{ |
|
"text": "Additionally, we include a Word Embeddingsbased Coherence as introduced by Fang et al. (2016) . This metric relies on pre-trained word embeddings such as GloVe or word2vec and evaluate the topic quality using a similarity metric between its top words. In other words, a high mutual embedding similarity between a model's top words reflects its underlying semantic coherence. In the context of this paper, we will use the sum of mutual cosine similarity computed on the Glove vectors 6 of the top N = 10 words of each topic:", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 93, |
|
"text": "Fang et al. (2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C W E = 2 N \u2022(N \u22121) N \u22121 i=1 N j=i+1 cos(v i , v j ) (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where v i and v j are the GloVe vectors of the words w i and w j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
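
{

"text": "A minimal sketch of C_WE as in Eqn (6), assuming 'glove' is a dictionary mapping words to numpy vectors loaded from the pre-trained GloVe files (added for illustration):\n\nimport numpy as np\nfrom itertools import combinations\n\ndef cosine(u, v):\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\n# Average pairwise cosine similarity of the GloVe vectors of a topic's top N words.\ndef c_we(top_words, glove):\n    vectors = [glove[w] for w in top_words if w in glove]   # skip out-of-vocabulary words\n    pairs = list(combinations(vectors, 2))\n    return sum(cosine(u, v) for u, v in pairs) / len(pairs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Coherence metrics",

"sec_num": "3.1"

},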
|
{ |
|
"text": "All metrics aggregate the different values at topic level using the arithmetic mean, in order to provide a coherence value for the whole model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The most used metric that relies on a ground truth is the Purity, defined as the fraction of documents in each cluster with a correct prediction (Hajjem and Latiri, 2017) . A prediction is considered correct if the original label coincides with the original label of the majority of documents falling in the same topic prediction. Given L the set of original labels and T the set of predictions:", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "(Hajjem and Latiri, 2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P urity(T, L) = 1 |T | i\u2208T max j\u2208L |T j \u2229 L j | (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
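
{

"text": "Purity as in Eqn (7) can be computed from two parallel lists holding the predicted topic and the ground-truth label of each document; a minimal sketch (added for illustration):\n\nfrom collections import Counter, defaultdict\n\n# For each predicted topic, the most frequent ground-truth label counts as correct.\ndef purity(predicted, labels):\n    clusters = defaultdict(list)\n    for topic, label in zip(predicted, labels):\n        clusters[topic].append(label)\n    correct = sum(Counter(members).most_common(1)[0][1] for members in clusters.values())\n    return correct / len(labels)\n\nprint(purity(['t1', 't1', 't2', 't2'], ['sport', 'sport', 'politics', 'sport']))   # 0.75",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Metrics which rely on a ground truth",

"sec_num": "3.2"

},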
|
{ |
|
"text": "In addition, we include in the API the following metrics used in the literature for evaluating the quality of classification or clustering algorithms, applied to the topic modeling task:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. Homogeneity: a topic model output is considered homogeneous if all documents assigned to each topic belong to the same ground-truth label (Rosenberg and Hirschberg, 2007) ; 2. Completeness: a topic model output is considered complete if all documents from one ground-truth label fall into the same topic (Rosenberg and Hirschberg, 2007) ;", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 173, |
|
"text": "(Rosenberg and Hirschberg, 2007)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 339, |
|
"text": "(Rosenberg and Hirschberg, 2007)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. V-Measure: the harmonic mean of Homogeneity and Completeness. A V-Measure of 1.0 corresponds to a perfect alignment between topic model outputs and ground truth labels (Rosenberg and Hirschberg, 2007) ; 4. Normalized Mutual Information (NMI) is the ratio between the mutual information between two distributions -in our case, the prediction set and the ground truth -normalised through an aggregation of those distributions' entropies (Lancichinetti et al., 2009) . The aggregation can be realised by selecting the minimum/maximum or applying the geometric/arithmetic mean. In the case of arithmetic mean, NMI is equivalent to the V-Measure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 203, |
|
"text": "(Rosenberg and Hirschberg, 2007)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 466, |
|
"text": "(Lancichinetti et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For these metrics, we use the implementations provided by scikit-learn (Pedregosa et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 95, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics which relies on a ground truth", |
|
"sec_num": "3.2" |
|
}, |
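
{

"text": "A minimal sketch of these scikit-learn calls (illustrative; the exact invocation inside ToModAPI may differ):\n\nfrom sklearn.metrics import (homogeneity_score, completeness_score,\n                             v_measure_score, normalized_mutual_info_score)\n\nlabels = ['sport', 'sport', 'politics', 'economy']   # ground-truth label of each document\npredicted = [0, 0, 1, 1]                             # topic assigned to each document\n\nprint(homogeneity_score(labels, predicted))\nprint(completeness_score(labels, predicted))\nprint(v_measure_score(labels, predicted))\n# With arithmetic-mean aggregation, NMI equals the V-Measure.\nprint(normalized_mutual_info_score(labels, predicted, average_method='arithmetic'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Metrics which rely on a ground truth",

"sec_num": "3.2"

},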
|
{ |
|
"text": "We now introduce ToModAPI, a Python library which harmonises the interfaces of topic modeling algorithms. So far, 9 topic modeling algorithms have been integrated in the library (Table 1) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 187, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each algorithm, the following interface methods are exposed:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 train which requires in input the path of a dataset and an algorithm-specific set of training parameters;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 topics which returns the list of trained topics and, for each of them, the 10 most representative words. Where available, the weights of those words in representing the topic are given;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 topic which returns the information (representative words and weights) about a single topic;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 predict which performs the topic inference on a given (unseen) text;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 get training predictions which provides the final predictions made on the training corpus. Where possible, this method is not performing a new inference on the text, but returns the predictions obtained during the training;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 coherence which computes the chosen coherence metric -among the ones described in Section 3.1 -on a given dataset;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 evaluate which evaluate the model predictions against a given ground truth, using the metrics described in Section 3.2. The structure of the library, which relies on class inheritance, is easy to extend with the addition of new models. In addition to allowing the import in any Python environment and use the library offline, it provides the possibility of automatically build a web API, in order to access to the different methods through HTTP calls. Table 2 provides a comparison between the ToModAPI, Gensim and STMM. Given that we wrap some Gensim models and methods (i.e. for coherence computation), some similarities between it and our work can be observed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 461, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The software is distributed under an open source license 7 . A demo of the web API is available at http://hyperted.eurecom.fr/topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ToModAPI: a Topic Modeling API", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Together with the library, we provide pre-trained models trained on two different datasets having different characteristics (20NG and AFP). A common pre-processing is performed on the datasets before training, consisting of:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Removing numbers, which, in general, do not contribute to the broad semantics;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Removing the punctuation and lower-casing;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Removing the standard English stop words;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Lemmatisation using Wordnet, in order to deal with inflected forms as a single semantic item;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Ignoring words with 2 letters or less. In facts, they are mainly residuals from removing punctuation -e.g. stripping punctuation from people's produces people and s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The same pre-processing is also applied to the text before topic prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
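
{

"text": "A minimal sketch of this pre-processing pipeline, assuming NLTK with its stop-word and WordNet resources is available (an illustration, not the library's exact implementation):\n\nimport re\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\n\nlemmatizer = WordNetLemmatizer()\nstop_words = set(stopwords.words('english'))\n\n# Numbers and punctuation removal, lower-casing, stop-word removal,\n# WordNet lemmatisation, and dropping words of 2 letters or less.\ndef preprocess(text):\n    text = re.sub('[0-9]+', ' ', text.lower())\n    tokens = re.findall('[a-z]+', text)              # strips punctuation as a side effect\n    tokens = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]\n    return [t for t in tokens if len(t) > 2]         # drops residuals such as 's'\n\nprint(preprocess(\"People's opinions on the 2019 elections.\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets and pre-trained models",

"sec_num": "5"

},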
|
{ |
|
"text": "7 https://github.com/D2KLab/ToModAPI", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and pre-trained models", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The 20 NewsGroups collection (20NG) (Lang, 1995) is a popular dataset used for text classification and clustering. It is composed of English news documents, distributed fairly equally across 20 different categories according to the subject of the text. We use a reduced version of this dataset 8 , which excludes all the documents composed by the sole header while preserving an even partition over the 20 categories. This reduced dataset contains 11,314 documents. We pre-process the dataset in order to remove irrelevant metadata -consisting of email addresses and news feed identifiers -keeping just the textual content. The average number of words per document is 142.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 48, |
|
"text": "(Lang, 1995)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "20 NewsGroups", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The Agence France Presse (AFP) publishes daily up to 2000 news articles in 5 different languages 9 , together with some metadata represented in the NewsML XML-based format. Each document is categorised using one or more subject codes, taken from the IPTC NewsCode Concept vocabulary 10 . In case of multiple subjects, they are ordered by relevance. In this work, we only consider the first level of the hierarchy of the IPTC subject codes. We extracted a dataset containing 125,516 news documents in English and corresponding to the production of AFP for the year 2019, with 237 words per document on average. Table 3 summarizes the number of documents for each topic in those two datasets. In AFP, a single document can be assigned to multiple subject, so we take each assignment into account. The two library Gensim STMM ToModAPI algorithms 8: LDA, LDA Sequence, LDA multicore, NMF, LSI, HDP, Author-topic model, DTM 11: LDA, LFTM, DMM, BTM, WNTM, PTM, SATM, ETM, GPU-DMM, GPU-PDMM, LF-DMM 9: LDA, LFTM, D2T, GSDMM, NMF, HDP, LSI, PVTM, CTM language Python Java Python focus general short text general training inference corpus predictions (by inferencing the corpus)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 617, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 1059, |
|
"text": "STMM ToModAPI algorithms 8: LDA, LDA Sequence, LDA multicore, NMF, LSI, HDP, Author-topic model, DTM 11: LDA, LFTM, DMM, BTM, WNTM, PTM, SATM, ETM, GPU-DMM, GPU-PDMM, LF-DMM 9: LDA, LFTM, D2T, GSDMM, NMF, HDP, LSI, PVTM, CTM", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Agence France Presse", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "coherence metrics c umass , c v , c uci , c npmi c umass c umass , c v , c uci , c npmi", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Agence France Presse", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Evaluation with Ground Truth -purity, NMI purity, homogeneity, completeness, v-measure, NMI usage import in script command line import in script, web API ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Agence France Presse", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We also describe the Wikipedia corpus (Wiki) 11 , which is a readily extracted and organised snapshot from 2013 that includes pages with at least 20 page views in English. This corpus has been used in other works, for example, for computing word embeddings (Leimeister and Wilson, 2018) . The corpus is distributed with some pre-processing already applied, like lower-casing and punctuation 11 https://storage.googleapis.com/ lateral-datadumps/wikipedia_utf8_ filtered_20pageviews.csv.gz stripping. However, we performed additional operations such as lemmatisation, stop-word and small word (2 characters or less) removal. The dataset consists of around 463k documents with 498M words. This corpus will not be used for training but only for evaluating the models (trained on 20NG or AFP) in order to reflect on the generalisation of the topics models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 286, |
|
"text": "(Leimeister and Wilson, 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikipedia Corpus", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We empirically evaluate the performances of the topic modeling algorithms described in Section 2 on the two datasets presented in Section 5 using the metrics detailed in Section 3. For each algorithm, we trained two different models, respectively on 20NG and AFP corpus. The number of topicswhen required by the algorithm -has been set to 20 and 7 when training on 20NG and AFP, respectively, in order to mimic the original division in class labels of the corpora (except for GSDMM and HDP which infer the optimal number of topics). Each model trained on either 20NG or AFP is tested against the same dataset and the Wikipedia dataset to compute each metric. Table 4 shows the average coherence scores of the topics computed on the 20NG dataset, together with the standard deviation, while the results of Table 5 refer to models computed on the AFP dataset. The results differ depending on the studied metric and the evaluation dataset. LFTM generalises better when evaluated against the Wikipedia corpus, probably thanks to the usage of pre-trained word vectors on large corpora. Overall, LDA has the best results on all metrics, always being among the top ones in terms of coherence. When trained on AFP, all topic models benefit of a bigger dataset; this results in generally higher scores and in different algorithms maximising specific metrics. We also consider the time taken by the different techniques for different tasks like training and getting prediction (Table 6 ). The results have been collected selecting the best of 3 different calls. The inference time has been computed using the models trained on the 20NG dataset, on a small sentence of 18 words 12 . The table shows LDA leading in training, while the longest execution time belongs to LFTM. The inference time for all models is in the order of few seconds or even less than 1 for GSDMM, HDP, LSI and PVTM. The manipulation of BERT embeddings makes CTM inference more time-consuming. The inference timing for D2T is not computed because its implementation is not available yet.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 666, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 805, |
|
"end": 812, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1467, |
|
"end": 1475, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment and Results", |
|
"sec_num": "6" |
|
}, |
|
|
{ |
|
"text": "In this paper, we introduced ToModAPI, a library and a Web API to easily train, test and evaluate topic models. 9 algorithms are already included in the library, while new ones will be added in future. Other evaluation metrics for topic modeling have been proposed (Wallach et al., 2009) and will be included in the API for enabling a complete evaluation. Among these, metrics based on word embeddings are gaining particular attention (Ding et al., 2018) . For further exploiting the advantage of having a common interface, we will study ways to automatically tune each model's hyper-parameters such as the right number of topics, find an appropriate label for the computed topics, optimise and use the models in real world applications. Finally, future work includes a deeper comparison of the models trained on different datasets. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 287, |
|
"text": "(Wallach et al., 2009)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 454, |
|
"text": "(Ding et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "ToModAPI: TOpic MODeling API", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/sronnqvist/ doc2topic", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the implementation of these metrics as provided in Gensim. The window size is kept at the default values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use a Glove model pre-trained on Wikipedia 2014 + Gigaword 5, available at https://nlp.stanford. edu/projects/glove/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/selva86/datasets/ 9 The catalogue can be explored at http://medialab. afp.com/afp4w/ 10 http://cv.iptc.org/newscodes/ subjectcode/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"Climate change is a global environmental issue that is affecting the lands, the oceans, the animals, and humans\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been partially supported by the French National Research Agency (ANR) within the ASRAEL (grant number ANR-15-CE23-0018) and ANTRACT (grant number ANR-17-CE38-0010) projects, and by the European Union's Horizon 2020 research and innovation program within the MeMAD (grant agreement No. 780069) and SILKNOW (grant agreement No. 769504) projects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Pre-training is a hot topic: Contextualized document embeddings improve topic coherence", |
|
"authors": [ |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvia", |
|
"middle": [], |
|
"last": "Terragni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Federico Bianchi, Silvia Terragni, and Dirk Hovy. 2020. Pre-training is a hot topic: Contextual- ized document embeddings improve topic coher- ence. ArXiv.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Supervised Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mcauliffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "20 th International Conference on Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei and Jon D. McAuliffe. 2007. Supervised Topic Models. In 20 th International Conference on Neural Information Processing Systems (NIPS), pages 121--128.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent Dirichlet Allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993--1022.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Novel Neural Topic Model and Its Supervised Extension", |
|
"authors": [ |
|
{ |
|
"first": "Ziqiang", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A Novel Neural Topic Model and Its Super- vised Extension. In AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Von der Form zur Bedeutung: Texte automatisch verarbeiten -From Form to Meaning: Processing Texts Automatically", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Chiarcos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Eckart De Castilho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Chiarcos, Richard Eckart de Castilho, and Manfred Stede. 2009. Von der Form zur Bedeutung: Texte automatisch verarbeiten -From Form to Mean- ing: Processing Texts Automatically. Narr Francke Attempto Verlag GmbH + Co. KG.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Journal of the American society for information science", |
|
"authors": [ |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Deerwester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Susan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Furnas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Landauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harshman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "391--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott Deerwester, Susan T Dumais, George W Fur- nas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American society for information science, 41(6):391-407.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Coherence-Aware Neural Topic Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ran", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "830--836", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018. Coherence-Aware Neural Topic Modeling. In Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 830-836, Brussels, Bel- gium.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Using Word Embedding to Evaluate the Coherence of Topics from Twitter Data", |
|
"authors": [ |
|
{ |
|
"first": "Anjie", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Macdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iadh", |
|
"middle": [], |
|
"last": "Ounis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Habel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "39 th International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1057--1060", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anjie Fang, Craig Macdonald, Iadh Ounis, and Philip Habel. 2016. Using Word Embedding to Evaluate the Coherence of Topics from Twitter Data. In 39 th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1057--1060.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A bayesian analysis of some nonparametric problems", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ferguson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "Annals of Statistics", |
|
"volume": "1", |
|
"issue": "2", |
|
"pages": "209--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas S. Ferguson. 1973. A bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209-230.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Combining IR and LDA Topic Modeling for Filtering Microblogs", |
|
"authors": [ |
|
{ |
|
"first": "Malek", |
|
"middle": [], |
|
"last": "Hajjem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chiraz", |
|
"middle": [], |
|
"last": "Latiri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "21 st International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "761--770", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malek Hajjem and Chiraz Latiri. 2017. Combining IR and LDA Topic Modeling for Filtering Microblogs. In 21 st International Conference on Knowledge- Based and Intelligent Information & Engineering Systems (KES), pages 761-770, Marseille, France.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Probabilistic models for context in social media. doctoral thesis", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Kling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Kling. 2016. Probabilistic models for con- text in social media. doctoral thesis, Universit\u00e4t Koblenz-Landau, Universit\u00e4tsbibliothek.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Detecting the overlapping and hierarchical community structure in complex networks", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Lancichinetti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Santo", |
|
"middle": [], |
|
"last": "Fortunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Kert\u00e9sz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "New Journal of Physics", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Lancichinetti, Santo Fortunato, and J\u00e1nos Kert\u00e9sz. 2009. Detecting the overlapping and hier- archical community structure in complex networks. New Journal of Physics, 11(3).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "NewsWeeder: Learning to Filter Netnews", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "20 th International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "331--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ken Lang. 1995. NewsWeeder: Learning to Filter Net- news. In 20 th International Conference on Machine Learning (ICML), pages 331-339.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "31 st International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1188--1196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In 31 st International Conference on Machine Learning (ICML), pages 1188-1196, Bejing, China.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Skip-gram word embeddings in hyperbolic space", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Leimeister", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Arxiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Leimeister and Benjamin J. Wilson. 2018. Skip-gram word embeddings in hyperbolic space. Arxiv.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Measuring the diffusion of innovations with paragraph vector topic models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Lenz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Winker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "PLOS ONE", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Lenz and Peter Winker. 2020. Measuring the diffusion of innovations with paragraph vector topic models. PLOS ONE, 15:1-18.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Using Topic Models in Content-Based News Recommender Systems", |
|
"authors": [ |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Luostarinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oskar", |
|
"middle": [], |
|
"last": "Kohonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "19 th Nordic Conference of Computational Linguistics (NODALIDA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tapio Luostarinen and Oskar Kohonen. 2013. Us- ing Topic Models in Content-Based News Recom- mender Systems. In 19 th Nordic Conference of Computational Linguistics (NODALIDA).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "MALLET: A Machine Learning for Language Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Andrew Kachites", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Kachites McCallum. 2002. MALLET: A Ma- chine Learning for Language Toolkit.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Distributed Representations of Words and Phrases and Their Compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "26 th International Conference on Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed Repre- sentations of Words and Phrases and Their Com- positionality. In 26 th International Conference on Neural Information Processing Systems (NIPS), vol- ume 2, pages 3111-3119, Lake Tahoe, NV, USA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Analyzing Entities and Topics in News Articles Using Statistical Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Chemudugunta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Padhraic", |
|
"middle": [], |
|
"last": "Smyth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Intelligence and Security Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Newman, Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. 2006. Analyzing Enti- ties and Topics in News Articles Using Statistical Topic Models. In Intelligence and Security Informat- ics, pages 93-104.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Improving Topic Models with Latent Feature Word Representations", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Dat Quoc Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lan", |
|
"middle": [], |
|
"last": "Billingsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "299--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving Topic Models with Latent Feature Word Representations. Transactions of the Association for Computational Linguistics, 3:299-313.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An analysis of the coherence of descriptors in topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Derek O'callaghan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Greene", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P\u00e1draig", |
|
"middle": [], |
|
"last": "Carthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Expert Systems with Applications", |
|
"volume": "42", |
|
"issue": "13", |
|
"pages": "5645--5657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Derek O'Callaghan, Derek Greene, Joe Carthy, and P\u00e1draig Cunningham. 2015. An analysis of the co- herence of descriptors in topic modeling. Expert Sys- tems with Applications, 42(13):5645-5657.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values", |
|
"authors": [ |
|
{ |
|
"first": "Pentti", |
|
"middle": [], |
|
"last": "Paatero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Unto", |
|
"middle": [], |
|
"last": "Tapper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Environmetrics", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "111--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pentti Paatero and Unto Tapper. 1994. Positive matrix factorization: A non-negative factor model with opti- mal utilization of error estimates of data values. En- vironmetrics, 5(2):111-126.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Scikit-learn: Machine Learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jake", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexan- dre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "GloVe: Global Vectors for Word Representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Short Text Topic Modeling Techniques, Applications, and Performance: A Survey", |
|
"authors": [ |
|
{ |
|
"first": "Jipeng", |
|
"middle": [], |
|
"last": "Qiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenyu", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunhao", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xindong", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jipeng Qiang, Zhenyu Qian, Yun Li, Yunhao Yuan, and Xindong Wu. 2019. Short Text Topic Model- ing Techniques, Applications, and Performance: A Survey. Arxiv.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Software Framework for Topic Modelling with Large Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Radim\u0159eh\u016f\u0159ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sojka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "LREC Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In LREC Workshop on New Challenges for NLP Frame- works, pages 45-50, Valletta, Malta.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3982--3992", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3982-3992, Hong Kong, China.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Exploring the space of topic coherence measures", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "R\u00f6der", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Both", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Hinneburg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "8 th ACM International Conference on Web Search and Data Mining (WSDM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "399--408", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael R\u00f6der, Andreas Both, and Alexander Hinneb- urg. 2015. Exploring the space of topic coherence measures. In 8 th ACM International Conference on Web Search and Data Mining (WSDM), pages 399- -408.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hirschberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "410--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- Measure: A Conditional Entropy-Based External Cluster Evaluation Measure. In Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410-420, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Autoencoding variational inference for topic models", |
|
"authors": [ |
|
{ |
|
"first": "Akash", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akash Srivastava and Charles Sutton. 2017. Autoen- coding variational inference for topic models. In International Conference on Learning Representa- tions (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Hierarchical dirichlet processes", |
|
"authors": [ |
|
{ |
|
"first": "Yee Whye", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Matthew", |

"middle": [ |

"J" |

], |

"last": "Beal", |

"suffix": "" |

}, |

{ |

"first": "David", |

"middle": [ |

"M" |

], |

"last": "Blei", |

"suffix": "" |

} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "101", |
|
"issue": "476", |
|
"pages": "1566--1581", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476):1566-1581.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Evaluation methods for topic models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iain", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "26 th Annual International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In 26 th Annual International Confer- ence on Machine Learning (ICML), pages 1105-- 1112.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A Dirichlet Multinomial Mixture Model-Based Approach for Short Text Clustering", |
|
"authors": [ |
|
{ |
|
"first": "Jianhua", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianyong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "20 th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianhua Yin and Jianyong Wang. 2014. A Dirich- let Multinomial Mixture Model-Based Approach for Short Text Clustering. In 20 th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining (KDD), pages 233--242.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Algorithm</td><td colspan=\"2\">Acronym Source implementation</td></tr><tr><td>Latent Dirichlet Allocation</td><td>LDA</td><td>http://mallet.cs.umass.edu/ (McCallum, 2002) (JAVA)</td></tr><tr><td>Latent Feature Topic Models</td><td>LFTM</td><td>https://github.com/datquocnguyen/LFTM (JAVA)</td></tr><tr><td>Doc2Topic</td><td>D2T</td><td>https://github.com/sronnqvist/doc2topic</td></tr><tr><td>Latent Semantic Indexing</td><td>LSI</td><td>https://radimrehurek.com/gensim/models/lsimodel.html</td></tr><tr><td>Paragraph Vector Topic Model</td><td>PVTM</td><td>https://github.com/davidlenz/pvtm</td></tr><tr><td>Context Topic Model</td><td>CTM</td><td>https://github.com/MilaNLProc/contextualized-topic-models</td></tr></table>", |
|
"text": "Gibbs Sampling for a DMM GSDMM https://github.com/rwalk/gsdmm Non-Negative Matrix Factorization NMF https://radimrehurek.com/gensim/models/nmf.html Hierarchical Dirichlet Processing HDP https://radimrehurek.com/gensim/models/hdpmodel.html", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Algorithms included in ToModAPI, with their source implementation. The original implementation of those model is in Python unless specified otherwise.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">datasets present multiple differences: total number</td></tr><tr><td colspan=\"3\">of documents, distribution of documents per sub-</td></tr><tr><td colspan=\"3\">ject, and the fact that for AFP, one document can</td></tr><tr><td colspan=\"2\">have multiple subjects.</td><td/></tr><tr><td>20NG</td><td>AFP</td><td/></tr><tr><td>rec.sport.hockey</td><td>600 Politics</td><td>47277</td></tr><tr><td>soc.religion.christian</td><td>599 Sport</td><td>36901</td></tr><tr><td>rec.motorcycles</td><td>598 Economy, Business, Finance</td><td>31042</td></tr><tr><td>rec.sport.baseball</td><td>597 Unrest, Conflicts and War</td><td>21140</td></tr><tr><td>sci.crypt</td><td>595 Crime, Law and Justice</td><td>16977</td></tr><tr><td>sci.med</td><td>594 Art, Culture, Entertainment</td><td>8586</td></tr><tr><td>rec.autos</td><td>594 Social Issues</td><td>7609</td></tr><tr><td>comp.windows.x</td><td>593 Disasters and Accidents</td><td>5893</td></tr><tr><td>sci.space</td><td>593 Human Interest</td><td>4159</td></tr><tr><td>comp.os.ms-windows.misc</td><td>591 Environmental Issue</td><td>4036</td></tr><tr><td>sci.electronics</td><td>591 Science and Technology</td><td>3502</td></tr><tr><td>comp.sys.ibm.pc.hardware</td><td>590 Religion and Belief</td><td>3081</td></tr><tr><td>misc.forsale</td><td>585 Lifestyle and Leisure</td><td>3044</td></tr><tr><td>comp.graphics</td><td>584 Labour</td><td>2570</td></tr><tr><td>comp.sys.mac.hardware</td><td>578 Health</td><td>2535</td></tr><tr><td>talk.politics.mideast</td><td>564 Weather</td><td>1159</td></tr><tr><td>talk.politics.guns</td><td>546 Education</td><td>734</td></tr><tr><td>alt.atheism</td><td>480</td><td/></tr><tr><td>talk.politics.misc</td><td>465</td><td/></tr><tr><td>talk.religion.misc</td><td>377</td><td/></tr><tr><td>Total</td><td>11314 Total</td><td>125516</td></tr></table>", |
|
"text": "Comparison between topic modeling libraries. For details about the acronyms, refer to the documentation", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Number of documents per subject in 20NG</td></tr><tr><td>(20 topics) and AFP (17 topics)</td></tr></table>", |
|
"text": "", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td>CI</td></tr><tr><td/><td>20NG</td><td>wiki</td><td>20NG</td><td>wiki</td><td>20NG</td><td>wiki</td><td>20NG</td><td>wiki</td></tr><tr><td>CTM</td><td colspan=\"8\">0.56 (076)</td></tr><tr><td>LSI</td><td colspan=\"8\">0.53 (0.22) 0.41 (0.11) 0.03 (0.16) -0.04 (0.10) -3.25 (2.16) -2.64 (1.08) -1.37 (2.89) -1.69 (2.59)</td></tr><tr><td>NMF</td><td colspan=\"8\">0.61 (0.19) 0.52 (0.15) 0.10 (0.15) -0.02 (0.12) -2.37 (1.61) -3.08 (4.83) -0.03 (2.24) -1.27 (2.97)</td></tr><tr><td>PVTM</td><td colspan=\"8\">0.54 (0.09) 0.46 (0.11) 0.06 (0.04) 0.04 (0.06) -1.63 (0.82) -1.52 (0.54) 0.21 (0.92) 0.25 (0.74)</td></tr></table>", |
|
"text": ".15) 0.46 (0.24) -0.04 (0.19) -0.06 (0.16) -5.78 (5.27) -4.28 (3.94) -3.09 (4.18) -2.51 (3.95) D2T 0.57 (0.14) 0.51 (0.10) 0.01 (0.11) 0.05 (0.05) -2.94 (1.67) -2.02 (0.49) -1.56 (2.39) 0.16 (0.81) GSDMM 0.50 (0.18) 0.41 (0.20) 0.00 (0.19) -0.04 (0.09) -3.86 (2.88) -2.45 (1.04) -2.02 (3.16) -1.44 (2.26) HDP 0.44 (0.21) 0.48 (0.24) -0.09 (0.17) -0.04 (0.10) -5.59 (5.04) -3.25 (3.18) -5.59 (5.04) -2.21 (2.64) LDA 0.64 (0.14) 0.55 (0.16) 0.10 (0.08) 0.07 (0.06) -1.98 (0.68) -1.75 (0.45) 0.27 (1.30) 0.53 (0.88) LFTM 0.53 (0.09) 0.56 (0.17) -0.01 (0.10) 0.07 (0.06) -2.97 (3.15) -1.72 (0.69) -1.47 (2.47) 0.58 (0.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>C v</td><td/><td>C N P M I</td><td/><td>C U M ASS</td><td/><td>C U CI</td></tr><tr><td/><td>AFP</td><td>wiki</td><td>AFP</td><td>wiki</td><td>AFP</td><td>wiki</td><td>AFP</td><td>wiki</td></tr><tr><td>CTM</td><td colspan=\"8\">0.54 (0.15) 0.56 (0.28) -0.05 (0.17) -0.04 (0.09) -6.56 (5.94) -3.47 (2.96) -2.75 (3.73) -1.49 (2.17)</td></tr><tr><td>D2T</td><td colspan=\"8\">0.58 (0.14) 0.45 (0.10) 0.06 (0.07) -0.01 (0.07) -2.25 (0.49) -2.44 (0.73) -0.02 (0.93) -1.07 (1.42)</td></tr><tr><td colspan=\"9\">GSDMM 0.51 (0.12) 0.58 (0.17) 0.09 (0.07) 0.03 (0.11) -1.72 (0.47) -2.73 (1.31) 0.70 (0.66) -0.29 (1.59)</td></tr><tr><td>HDP</td><td colspan=\"8\">0.42 (0.10) 0.69 (0.22) 0.02 (0.07) 0.01 (0.16) -2.23 (0.92) -2.74 (2.63) -0.20 (1.05) -0.63 (2.86)</td></tr><tr><td>LDA</td><td colspan=\"8\">0.65 (0.10) 0.54 (0.11) 0.11 (0.04) 0.06 (0.06) -1.40 (0.23) -1.88 (0.48) 0.80 (0.30) 0.25 (0.89)</td></tr><tr><td>LFTM</td><td colspan=\"8\">0.59 (0.14) 0.54 (0.20) 0.06 (0.10) 0.06 (0.12) -1.97 (2.40) -1.91 (2.19) 0.11 (2.08) 0.22 (2.58)</td></tr><tr><td>LSI</td><td colspan=\"8\">0.58 (0.12) 0.55 (0.14) 0.07 (0.09) 0.05 (0.11) -1.80 (0.47) -2.59 (1.37) 0.09 (0.96) -0.36 (1.87)</td></tr><tr><td>NMF</td><td colspan=\"8\">0.67 (0.12) 0.46 (0.12) 0.13 (0.06) 0.04 (0.07) -1.27 (0.29) -1.73 (0.69) 0.95 (0.42) 0.07 (1.26)</td></tr><tr><td>PVTM</td><td colspan=\"6\">0.52 (0.12) 0.51 (0.09) 0.07 (0.06) 0.04 (0.04) -1.16 (0.34) -1.56</td><td colspan=\"2\">0.86 0.49 (0.41) 0.14 (0.63)</td></tr></table>", |
|
"text": "The mean and standard deviation of different coherence metrics computed on 2 reference corpora 20NG and Wikipedia. The models have been trained on 20NG.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "The mean and standard deviation of different coherence metrics computed on 2 reference corpora AFP and Wikipedia. The models have been trained on AFP.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Model comparison from a time (in seconds) delay standpoint for training and inference.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |