|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:20.861022Z" |
|
}, |
|
"title": "Topic Modeling for Maternal Health Using Reddit", |
|
"authors": [ |
|
{ |
|
"first": "Shuang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shivani", |
|
"middle": [], |
|
"last": "Pandya", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "John Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Smisha", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "John Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Sedoc", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper applies topic modeling to understand maternal health topics, concerns, and questions expressed in online communities on social networking sites. We examine latent Dirichlet analysis (LDA) and two state-of-theart methods: neural topic model with knowledge distillation (KD) and Embedded Topic Model (ETM) on maternal health texts collected from Reddit. The models are evaluated on topic quality and topic inference, using both auto-evaluation metrics and human assessment. We analyze a disconnect between automatic metrics and human evaluations. While LDA performs the best overall with the auto-evaluation metrics NPMI and Coherence, Neural Topic Model with Knowledge Distillation is favourable by expert evaluation. We also create a new partially expert annotated gold-standard maternal health topic modeling dataset for future research.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper applies topic modeling to understand maternal health topics, concerns, and questions expressed in online communities on social networking sites. We examine latent Dirichlet analysis (LDA) and two state-of-theart methods: neural topic model with knowledge distillation (KD) and Embedded Topic Model (ETM) on maternal health texts collected from Reddit. The models are evaluated on topic quality and topic inference, using both auto-evaluation metrics and human assessment. We analyze a disconnect between automatic metrics and human evaluations. While LDA performs the best overall with the auto-evaluation metrics NPMI and Coherence, Neural Topic Model with Knowledge Distillation is favourable by expert evaluation. We also create a new partially expert annotated gold-standard maternal health topic modeling dataset for future research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Evidence suggests that poor quality maternal and newborn care is responsible for nearly 60% of the estimated 5 million deaths each year globally (Kruk et al., 2018) . This situation spouses urgent demands of artificial intelligence. AI has potential applications in automating aspects of service delivery such as basic counselling, thereby reducing the burden on health systems. In this work, we hope to help improving healthcare counselling AI by understanding maternal health related content. What are people concerning about regarding maternal health? And among state-of-the-art techniques, which model is the best at extracting and interpreting those highly-professional topics?", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 164, |
|
"text": "(Kruk et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With the development of the online communities and social media, advanced text mining techniques are widely needed when applied to domains. It is necessary to understand a question in order to answer it, and the first step towards understanding the content of a document is to determine which topics that document addresses. Therefore, many successful conversational agents employ topic models to encourage more coherent dialogue (Baheti et al., 2018; Vlasov et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 451, |
|
"text": "(Baheti et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 472, |
|
"text": "Vlasov et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Topic models can provide legible and concise representations of both the entire corpus and individual documents because they represent topics as ranked word lists and documents regarding their probable topics. Therefore, while mainly applied to mining the text content, topic models can also classify the topics for new documents by inferring the latent topic distribution. In classical topic models like Latent Dirichlet Allocation (LDA), distributions over the latent variables are estimated with variational inference algorithms (EM) or Gibbs sampling (Blei et al., 2003) . However, recently, with the development of deep learning methods, now there are neural topic models that incorporate additional information and leverage the variational autoencoder (VAE) framework for latent variable inference. Particularly, pre-trained transformerbased language models (e.g., BERT, (Devlin et al., 2019) ) is employed to fine-tune on a wide variety of NLP problems; some models combine the advantages of pre-trained transformers and topic models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 574, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 877, |
|
"end": 898, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work contributes the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Provide an exemplary application of the three topic models (LDA, Neural topic model with knowledge distillation, and Embedding Topic Model) on texts in the maternity health domain. \u2022 Evaluate the model performances from both quantitative and qualitative perspectives. The assessment may inspire ideas in metric design and model improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Provide a new partially annotated dataset in the maternal healthcare domain for topic modelling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are a multitude of state-of-the-art topic models adapted from the LDA model. Neural topic models (Srivastava and Sutton, 2017) surpass traditional methods in architecture by using various forms of neural networks and can be applied to either labeled or unlabeled data. Hoyle et al. (2020) further improves neural topic models by knowledge distillation (KD). Another thread of innovations improve models in terms of semantic meaning of words, for example, Dieng et al. (2020) incorporates word embedding to topic models. Given that our dataset is unlabeled, this work focus on two state-of-the-art models, the KD model and the Embedding Topics Model (ETM), and compares their performances to LDA. We briefly summarize the details of LDA, KD and ETM in this section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 132, |
|
"text": "(Srivastava and Sutton, 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 294, |
|
"text": "Hoyle et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Latent Dirichlet Allocation (LDA) is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. The topic probabilities provide an explicit representation of a document. LDA follows the generative process as below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. For each document d, draw the topic proportion \u03b8 d from Dir(\u03b1);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2. For each word w n in document d:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(a) Choose a topic z \u223c M ultinomial(\u03b8); (b) Choose a word w n from p(w n |z n , \u03b2), a multinomial probability conditioned on the topic z n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In our instance, we adopted MALLET (McCallum, 2002) , a machine learning toolkit which implements LDA and conducts variational inference by Gibbs Sampling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 51, |
|
"text": "(McCallum, 2002)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent Dirichlet Allocation", |
|
"sec_num": "2.1" |
|
}, |
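
{
"text": "To make the generative process above concrete, the following is a minimal simulation sketch in Python with NumPy. It is our own illustration rather than part of the original study: the vocabulary size, topic count, and Dirichlet priors are arbitrary placeholders, and the actual experiments used MALLET's Gibbs-sampling implementation instead.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nV, K, alpha = 1000, 10, 0.1  # illustrative vocabulary size, topic count, Dirichlet prior\nbeta = rng.dirichlet(np.full(V, 0.01), size=K)  # per-topic word distributions\n\ndef generate_document(n_words=50):\n    theta = rng.dirichlet(np.full(K, alpha))  # step 1: per-document topic proportions\n    words = []\n    for _ in range(n_words):\n        z = rng.choice(K, p=theta)    # step 2a: draw a topic assignment\n        w = rng.choice(V, p=beta[z])  # step 2b: draw a word conditioned on that topic\n        words.append(w)\n    return words\n\ndoc = generate_document()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation",
"sec_num": "2.1"
},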
|
{ |
|
"text": "Hoyle et al. (2020) combine neural topic modeling with knowledge distillation. It generates and stores the teacher logits (latent topics) z teacher for each document in the training set using DISTIL-BERT (Sanh et al., 2019) as a deterministic auto-encoder and training on document reconstruction as a decoder. Then it entails the framework of the neural topic model SCHOLAR but substitutes the reconstruction loss L R with L KD :", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 223, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Topic Model with Knowledge Distillation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "L R = (w BoW d ) T log f (\u03b8 d , B) L KD = \u03bbT 2 (w teacher d ) T log f (\u03b8 d , B; T ) + (1 \u2212 \u03bb)L R f (\u03b8 d , B) = \u03c3(m + \u03b8 T d B)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Topic Model with Knowledge Distillation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where \u03c3(\u2022) is the softmax function, w teacher d is the result of teacher logits z teacher scaled by softmax temperature T, and f (\u2022; T ) is the scaled version of f (\u2022). In doing so, the weights were updated to minimize the original reconstruction loss of SCHOLAR and the loss of reconstruction from teacher logits, which are pretrained and deterministic, so the knowledge contained in pretrained transformers is distilled to this student model. In this instance, we adopted the implementation of Hoyle et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 515, |
|
"text": "Hoyle et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Topic Model with Knowledge Distillation", |
|
"sec_num": "2.2" |
|
}, |
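
{
"text": "As an illustration of the objective above, here is a minimal sketch of the combined loss in PyTorch. This is our own rendering under the usual negative log-likelihood sign convention, not the released implementation; theta, B, m, w_bow, and teacher_logits are assumed to be appropriately shaped tensors.\n\nimport torch\nimport torch.nn.functional as F\n\ndef kd_loss(theta, B, m, w_bow, teacher_logits, T=2.0, lam=0.5):\n    # f(theta, B) = softmax(m + theta^T B): reconstructed word distribution\n    logits = m + theta @ B\n    log_f = F.log_softmax(logits, dim=-1)\n    log_f_T = F.log_softmax(logits / T, dim=-1)        # temperature-scaled f(.; T)\n    w_teacher = F.softmax(teacher_logits / T, dim=-1)  # teacher logits scaled by temperature T\n    loss_r = -(w_bow * log_f).sum(dim=-1)              # reconstruction term L_R\n    loss_kd = -(w_teacher * log_f_T).sum(dim=-1)       # distillation term\n    return (lam * T ** 2 * loss_kd + (1 - lam) * loss_r).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Topic Model with Knowledge Distillation",
"sec_num": "2.2"
},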
|
{ |
|
"text": "Embedding Topics Model (ETM; Dieng et al., 2020) is also a generative probabilistic model that combines the useful properties of topic models and word embeddings. As a topic model, it discovers an interpretable latent semantic structure of the texts; as a word embedding, it provides a low-dimensional representation of the meaning of words. In contrast to LDA, the per-topic conditional probability of a term has a log-linear form that involves a low-dimensional representation of the vocabulary. Each term is represented by an embedding, and each topic is a point in that embedding space; the topic's distribution over terms is proportional to the exponentiated inner product of the topic's embedding and each term's embedding. ETM follows the generative process as below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Topics Model", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "1. For each document d, draw the topic propor- tion \u03b8 d from LN (\u03b1); 2. For each word w n in document d: (a) Choose a topic z \u223c M ultinomial(\u03b8); (b) Choose w n from sof tmax(\u03c1 T \u03b1 z ), a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Topics Model", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "multinomial probability computed by the inner product of word embedding and the embedding of topic z.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Topics Model", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In this instance, we adopted the implementation of Dieng et al. (2020).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Topics Model", |
|
"sec_num": "2.3" |
|
}, |
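
{
"text": "To illustrate the log-linear topic-word distribution described above, here is a small sketch in PyTorch; the dimensions and random embeddings are placeholders of our own, not the released ETM code.\n\nimport torch\nimport torch.nn.functional as F\n\nV, K, E = 5000, 25, 300      # illustrative vocabulary size, topic count, embedding dimension\nrho = torch.randn(E, V)      # word embeddings (one column per word)\nalpha = torch.randn(E, K)    # topic embeddings (one column per topic)\n\n# beta[k, v] is proportional to exp(rho_v . alpha_k); each row is a distribution over the vocabulary\nbeta = F.softmax(rho.t() @ alpha, dim=0).t()  # shape (K, V)\n\ntheta = F.softmax(torch.randn(K), dim=0)      # document topic proportions (logistic-normal style draw)\nz = torch.multinomial(theta, 1).item()        # choose a topic\nw = torch.multinomial(beta[z], 1).item()      # choose a word from softmax(rho^T alpha_z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Topics Model",
"sec_num": "2.3"
},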
|
{ |
|
"text": "3 Experimental Details 3.1 Data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Topics Model", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For our project, we deploy data from the popular site Reddit spanning between December 2005 and April 2019 because of the abundance of data in almost any given topic due to the site's specialized \"subreddits\". The public data on Reddit is accessible, which provides convenience to researchers, and the specialized channel create an online community about maternal healthcare, making it easier to collect and filter a domain corpus. To make the data set more manageable and remove unwanted data, we eliminated all subreddits not in a precompiled list of relevant subreddits chosen by a healthcare professional. Finally we end up with 24k comments for training and 6k comments as hold-out samples. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collection", |
|
"sec_num": "3.1.1" |
|
}, |
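
{
"text": "A sketch of the filtering and splitting step, assuming the raw comments are stored as JSON lines with \"subreddit\" and \"body\" fields; the file name and the allowlist shown here are placeholders, not the list curated by the healthcare professional.\n\nimport json\nimport random\n\nRELEVANT_SUBREDDITS = {\"BabyBumps\", \"pregnant\", \"birthcontrol\"}  # placeholder allowlist\n\ndef load_relevant_comments(path):\n    comments = []\n    with open(path) as f:\n        for line in f:\n            record = json.loads(line)\n            if record.get(\"subreddit\") in RELEVANT_SUBREDDITS:\n                comments.append(record[\"body\"])\n    return comments\n\ncomments = load_relevant_comments(\"reddit_comments.jsonl\")\nrandom.seed(0)\nrandom.shuffle(comments)\nsplit = int(0.8 * len(comments))\ntrain, holdout = comments[:split], comments[split:]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collection",
"sec_num": "3.1.1"
},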
|
{ |
|
"text": "Before applying the models on this dataset, we preprocess the data for each model to prepare valid tokens or word embeddings. For LDA and the neural topic model with knowledge distillation (KD), we preprocess the data by removing common stop words in MALLET stoplist. LDA uses MALLET's built-in tokenizer, and the other two models tokenize the text by single words and convert the word lists to bag of words. Except for the KD model which requires pretraining with the teacher model, ETM also requires pretraining the word embedding on the training documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1.2" |
|
}, |
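
{
"text": "A minimal preprocessing sketch using gensim; gensim's built-in stop word list stands in for the MALLET stoplist, and the example comments are made up for illustration.\n\nfrom gensim.corpora import Dictionary\nfrom gensim.parsing.preprocessing import STOPWORDS\n\ncomments = [\"My period is two weeks late and the test was negative\",\n            \"The epidural helped a lot during labor and delivery\"]\n\ndef tokenize(doc):\n    # single-word tokens, lowercased, with stop words removed\n    return [t for t in doc.lower().split() if t.isalpha() and t not in STOPWORDS]\n\ntokenized = [tokenize(c) for c in comments]\ndictionary = Dictionary(tokenized)\nbow_corpus = [dictionary.doc2bow(toks) for toks in tokenized]  # bag-of-words input for the models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1.2"
},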
|
{ |
|
"text": "Mimno et al. 2011proposed coherence to measure the quality of topics. Coherence measures topic word co-occurrence across documents to detect low quality topics, and show it correlates with expert topic annotations. The Coherence of topic t is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "coherence(t) := N \u22121 i=1 N j=i+1 log D(v t i , v t j ) D(v t i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": ", D(x) represents the number of documents word x appears in, and D(x, y) represents the number of documents x and y co-appear in. Therefore, if the words in the same topic always appear in the same documents, this topic is coherent, and the score approaches zero. Otherwise, if the words always appear in different documents, the coherence score approaches negative infinity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coherence", |
|
"sec_num": "3.2.1" |
|
}, |
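
{
"text": "A direct implementation of this definition follows as a sketch; the small smoothing constant inside the logarithm is our own addition to avoid log(0), and the toy documents are made up.\n\nfrom itertools import combinations\nfrom math import log\n\ndef coherence(top_words, docs_as_sets, eps=1e-12):\n    # D(x): number of documents containing x; D(x, y): number containing both x and y\n    score = 0.0\n    for v_i, v_j in combinations(top_words, 2):  # pairs with i < j\n        d_i = sum(1 for d in docs_as_sets if v_i in d)\n        d_ij = sum(1 for d in docs_as_sets if v_i in d and v_j in d)\n        score += log((d_ij + eps) / d_i)\n    return score\n\ndocs = [{\"period\", \"late\", \"test\"}, {\"period\", \"cramps\"}, {\"labor\", \"delivery\"}]\nprint(coherence([\"period\", \"late\", \"cramps\"], docs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.1"
},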
|
{ |
|
"text": "First introduced by Bouma (2009) , Normalized Pointwise Mutual Information (NPMI) is adapted from PMI. Newman et al. (2010) shows that the auto-evaluation of topic-semantic coherence using PMI is highly correlated with human evaluation, and NPMI has been widely used as a quantitative measurement of topic quality (Aletras and Stevenson, 2013; Hoyle et al., 2020) . The NPMI of topic t is defined as", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 32, |
|
"text": "Bouma (2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 123, |
|
"text": "Newman et al. (2010)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 343, |
|
"text": "(Aletras and Stevenson, 2013;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 363, |
|
"text": "Hoyle et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NPMI", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "N P M I(t) := N \u22121 i=1 N j=i+1 log P (v t i ,v t j ) P (v t i )P (v t j ) \u2212 log P (v t i , v t j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NPMI", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": ".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NPMI", |
|
"sec_num": "3.2.2" |
|
}, |
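
{
"text": "A corresponding sketch for NPMI over the top words of a topic; treating never co-occurring pairs as -1 and always co-occurring pairs as 0 is our own convention for the limiting cases.\n\nfrom itertools import combinations\nfrom math import log\n\ndef npmi(top_words, docs_as_sets):\n    n = len(docs_as_sets)\n    score = 0.0\n    for v_i, v_j in combinations(top_words, 2):\n        p_i = sum(1 for d in docs_as_sets if v_i in d) / n\n        p_j = sum(1 for d in docs_as_sets if v_j in d) / n\n        p_ij = sum(1 for d in docs_as_sets if v_i in d and v_j in d) / n\n        if p_ij == 0.0:\n            score += -1.0  # limiting value when the pair never co-occurs\n        elif p_ij < 1.0:\n            score += log(p_ij / (p_i * p_j)) / (-log(p_ij))\n        # if p_ij == 1.0, PMI is 0, so the pair contributes nothing\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPMI",
"sec_num": "3.2.2"
},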
|
{ |
|
"text": "We also use Topic Uniqueness (TU) as a quantitative measurement. As the coherence and NPMI measure how well the top words within a topic share similar context, TU measures to what extend the different topics overlap. The TU for topic t is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Uniqueness", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "T U (t) := 1 N N i=1 1 count(v t i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Uniqueness", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Uniqueness", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "count(v t i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Uniqueness", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "is the total number of times the word v t i appears in top N words across all topics. If T U (t) equals to 1, then all top words of topic t don't appear in top words of any other topics. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Uniqueness", |
|
"sec_num": "3.2.3" |
|
}, |
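
{
"text": "The following sketch computes TU for every topic from its top-N word list, directly mirroring the definition above (the example word lists are made up).\n\nfrom collections import Counter\n\ndef topic_uniqueness(topics_top_words):\n    # topics_top_words: one list of top-N words per topic\n    counts = Counter(w for words in topics_top_words for w in words)  # count(v) across all topics\n    return [sum(1.0 / counts[w] for w in words) / len(words) for words in topics_top_words]\n\nprint(topic_uniqueness([[\"period\", \"cramps\", \"cycle\"], [\"period\", \"pregnancy\", \"test\"]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Uniqueness",
"sec_num": "3.2.3"
},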
|
{ |
|
"text": "Two experts in public health identified topics generated by the 10-topic and 25-topic models. By reading through the top words for each topic, one expert annotates effective words and summarize the topic, and the other checks the annotation and topic names, and makes complement. They make an agreement on the final assessment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Assessment", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ETM dismisses more topics than the other two models. When looking for 10 topics with each model, there are a total of 10 topics identified from all topics generated by the three models, and only 5 of them are identified in all of the three models: \"Pregnancy\", \"Abortion\", \"Vaginal health\", \"Menstruation\", \"Labor and delivery\". Besides that, only the KD model generated topics which are identified as \"Penile health\", \"Breastfeeding\" and \"pregnancy symptoms\". ETM also generated the least identifiable topics when looking for 25 topics. Out of the 13 topics identified, ETM dismisses \"Sex\", \"Maternity and paternity leave\" and \"Stis\", and it had 4 topics that cannot be identified as maternity-related.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Assessment", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Moreover, the KD model has the most representative top words. The expert was asked to annotate related words among the 20 top words within each topic, and the KD model achieves 17.8 and 15.7 words on average for the 10-topic and 25-topic models respectively, much more than LDA and ETM, as shown in Table 4 4.3 Inference", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Assessment", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To infer the topics for the new documents held-out from the training data, we sampled 10 comments from the dev set and inferred their topics one by one. According to the performance reported above, we selected LDA 25 and KD 25 to make the inference. Among the 10 comments displayed in Appendix B, KD had better topic classification. LDA and KD misclassified comment 1 as \"penile health\" and \"vaginal health\" respectively, while the comment is related to \"pregnancy\". Comment 6 was also misclassified by both. Comment 8 was misclassified as \"pregnancy\" by LDA but classified as \"abortion\" correctly by KD. Comment 9 was classified as \"pregnancy\" by LDA, which was correct, but KD more accurately classification as \"pregnancy risk\". All other comments are correctly classified by both models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Assessment", |
|
"sec_num": "4.2" |
|
}, |
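
{
"text": "As a small self-contained illustration of topic inference on a held-out comment, the following sketch trains gensim's LdaModel on a toy corpus and infers the most probable topic of a new document; the documents and hyperparameters are made up and do not reproduce the paper's setup.\n\nfrom gensim.corpora import Dictionary\nfrom gensim.models import LdaModel\n\ndocs = [[\"period\", \"late\", \"pregnancy\", \"test\"],\n        [\"epidural\", \"labor\", \"delivery\", \"hospital\"],\n        [\"pill\", \"birth\", \"control\", \"side\", \"effects\"]]\ndictionary = Dictionary(docs)\nlda = LdaModel([dictionary.doc2bow(d) for d in docs], id2word=dictionary,\n               num_topics=3, passes=20, random_state=0)\n\nnew_doc = [\"period\", \"late\", \"test\", \"negative\"]  # tokenized held-out comment\ntopic_dist = lda.get_document_topics(dictionary.doc2bow(new_doc), minimum_probability=0.0)\nbest_topic, prob = max(topic_dist, key=lambda pair: pair[1])  # most probable topic\nprint(best_topic, lda.show_topic(best_topic, topn=4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Assessment",
"sec_num": "4.2"
},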
|
{ |
|
"text": "Measurement Incompatibility There is an incompatibility between human assessment and the auto-evaluation metrics for topic quality. In terms of the auto-evaluation metric Coherence, LDA always performs the best, which is inconsistent with the human assessment. In terms of NPMI, KD dominates in the 10-topic setting, but LDA still performs better in the 25-topic, 50-topic, and 100topic settings. In terms of Topic Uniqueness, KD is the best in 10-topic, 25-topic setting, which is the same as human evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Although fixed values are set for the number of topics when training on the corpus, fewer topics could be identified from the generated word lists because of the overlapping among the lists. For example, when generating 10 topics, KD has 9 topics identified by the expert, while topic 3 and topic 5 are categorized as \"vaginal health\". For ETM, there are more overlapped generated lists: topic \"pregnancy\", \"vaginal health\", \"menstruation\", and \"labor and delivery\" all include 2 generated lists. LDA KD ETM n/a 1 / 1 -/ 1 -/ 4 pregnancy 1 / 5 1 / 3 2 / 4 birth control 1 / 3 1 / 5 -/ 1 abortion 1 / 2 1 / 2 1 / 3 menstruation 2 / 3 1 / 2 2 / 3 vaginal health 1 / 2 2 / 3 2 / 3 penile health -/ 1 1 / 1 -/ 1 sex 1 / 1 -/ 1 1 /breastfeeding -/ 1 1 / 1 -/ 1 pregnancy symptoms -/ 1 1 / --/ 2 labor and delivery 2 / 2 1 / 3 2 / 2 pregnancy risks LDA KD ETM 10-topic 6.7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Repetitiveness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-/ 1 -/ 1 -/ 1 maternity leave -/ 1 -/ 1 -/ - stis -/ 1 -/ 1 -/ -", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Repetitiveness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "17.8 8.7 25-topic 9.5 15.7 10.6 Tables 2 and 5 , LDA and ETM include more frequently appeared words such as \"feel\", \"period\" and \"life\" in top word lists, the top words of each topic generated by KD is professional and of lowfrequency.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 46, |
|
"text": "Tables 2 and 5", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Repetitiveness", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We applied and compared three different topic models on the texts related to maternal health and discovered topics with high-quality. By both autoevaluation and human assessment, we evaluated the topic quality generated by different models and observed incompatibility among metrics. Taking the human assessment as a gold standard, we would LDA KD ETM 10-topic 0.1102 0.0132 0.0558 25-topic 0.0881 0.0280 0.0441 50-topic 0.0805 0.0275 0.0424 100-topic 0.0747 0.0250 0.0407 200-topic 0.0662 0.0240 0.0432 Table 5 : Average word frequencies for topics generated. The frequency of a word wis represented with the proportion of documents which contain the word w.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 504, |
|
"end": 511, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "recommend applying the neural topic model with knowledge distillation, because it tends to rank professional words that are unique to one topic as top words, and it covers more diverse topics when asked to generate a fixed number of topics. Regarding topic inference on new documents, both LDA and the knowledge-distilled model perform well. The experiment results are shown in Table 6 , Table 7 and Table 8 We trained the LDA model with 4 CPU cores and trained the other two model with a single GPU core. Table 9 shows that LDA implemented by Mallet is quite faster than the other two models, and KD is the most time-consuming one because of the pre-training of teacher model. But once the logits of teacher model has been saved, it is fast to train the student models with various number of topics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 385, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 407, |
|
"text": "Table 7 and Table 8", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 513, |
|
"text": "Table 9", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Comment 1 I was waiting to see if someone brought this up! Alcohol and tobacco are not the same. Moderate alcohol use (one drink a day or less) hasn't been shown to be harmful to a pregnancy. ANY tobacco use is harmful to a pregnancy. I would feel much more comfortable refusing to serve hooka to a pregnant woman that refusing to serve alcohol. It's actually been shown that moderate alcohol consumption can be beneficial (-drinking-pregnant-women.html)! LDA KD ETM 10-topic 0.61 1.00 0.77 25-topic 0.55 0.72 0.71 50-topic 0.48 0.50 0.52 100-topic 0.37 0.33 0.34 200-topic 0.30 0.22 0.19 Comment 6 Colon cancer can be *prevented* by regular colonoscopies. They remove precancerous polyps while they're up there so you won't ever develop it in the first place. It honestly has to be one of the most preventable cancers, and too many people know nothing about it. If you're over 40, get the pooper scope! It's a good decision. (LDA -vaginal health, KD -vaginal health)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comment 7 They make some very small models (check a comparison chart for the actual sizes), and keep in mind that cups are easier to insert than tampons for a lot of people (like me), because tampons are dry. It's pretty easy to moisten cups with a bit of water (or lube, even), since you're in the bathroom anyway. [Here's the latest comparison chart I could find.] For me, they've been a lifesafer since I can't use tampons, pads and my sensitive skin don't get along, and because I have a very heavy flow. They're not for everyone, but they're definitely the most comfortable alternative I've tried. (LDA -menstruation, KD -menstruation)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comment 8 Well, kinda yeah. there are other ways to keep yourself from having a baby besides abortions, and tbh, abstinence isn't the end of the world, lol. It isn't fair of anyone to think its okay to destroy the potential of an unborn child because they simply don't want to deal with the consequences of a certain lifestyle. Change your lifestyle, or use better protection, or have the child anyway, but don't just write off the life of a fetus like it is worth nothing, because it isn't.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(LDA -pregnancy, KD -abortion)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comment 9 My cycles were off, so I had a decent wait. I went to the health dept to get confirmation and health insurance, then scheduled a drs appointment which was a week or two away, the doctor guessed I was about 6-7 weeks, though my lmp said I was 14 weeks because he couldn't find a heartbeat with Doppler and he was feeling for my uterus. Then I was called the next day with a date for an ultrasound about two weeks away and was told to go back to get the rest of my blood drawn, as they only took one vial the day before. When I should have been 16 weeks, I was 12 weeks, as it showed on the ultrasound. Ultimately, you may as well wait. Waiting is something you deal with all of pregnancy, and your wait is really short, so if anything, I'd wait for the appointment and express your angst to find out how far along she is in hopes of getting things moving. (LDA -pregnancy, KD -pregnancy risks)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comment 10 I've had my Mirena for nearly 5 years (getting it replaced this summer) and I'm one of the lucky ones where insertion didn't hurt at all and compared to when I was on the pill, my sex drive went waaaaay up. Not sure if it returned to \"normal\" or not since I had been on the pill for years and years before I became sexually active so I have nothing to compare it to, but my boyfriend is a lot happier now :p I also haven't had a \"real\" period in about 4 years, as compared to every two to three weeks even on the pill. Needless to say, I'm a huge personal Mirena fan :) (LDA -birth control, KD -birth control)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Appendix: Examples for Inference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/jsedoc/maternal_ health_topics", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Experimental Results4.1 Automatic EvaluationWe train each model under 10, 25, 50, 100, and 200 topic settings, and compare them with respect to the three metrics.Table 6, Table 7 and Table 8in Appendix A show the performances of different models in terms of Coherence, NPMI and Topic Uniqueness respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluating topic coherence using distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Aletras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 10th International Conference on Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolaos Aletras and Mark Stevenson. 2013. Evalu- ating topic coherence using distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics, pages 13-22.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Generating more interesting responses in neural conversation models with distributional constraints", |
|
"authors": [ |
|
{ |
|
"first": "Ashutosh", |
|
"middle": [], |
|
"last": "Baheti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3970--3980", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1431" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional con- straints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3970-3980, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine Learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3(Jan):993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Normalized (pointwise) mutual information in collocation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Gerlof", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Biennial GSCL Conference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "31--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceed- ings of the Biennial GSCL Conference, 1:31-40.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bert: pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, 1:4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Topic modeling in embedding spaces. Transactions of the Association for", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Adji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dieng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Francisco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ruiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "439--453", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adji B. Dieng, Francisco J.R. Ruiz, and David M. Blei. 2020. Topic modeling in embedding spaces. Trans- actions of the Association for Computational Lin- guistics, 3:439-453.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Improving neural topic models using knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Hoyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1752--1771", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Hoyle, Pranav Goel, and Philip Resnik. 2020. Improving neural topic models using knowl- edge distillation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing, pages 1752-1771.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "High-quality health systems in the sustainable development goals era: time for a revolution", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Kruk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Gage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Arsenault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keely", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Leslie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanam", |
|
"middle": [], |
|
"last": "Roder-Dewan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olusoji", |
|
"middle": [], |
|
"last": "Adeyi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Barker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernadette", |
|
"middle": [], |
|
"last": "Daelmans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Doubova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "English", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ezequiel", |
|
"middle": [ |
|
"Garc\u00eda" |
|
], |
|
"last": "Elorrio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederico", |
|
"middle": [], |
|
"last": "Guanais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oye", |
|
"middle": [], |
|
"last": "Gureje", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Hirschhorn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lixin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Kelley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ephrem", |
|
"middle": [ |
|
"Tekle" |
|
], |
|
"last": "Lemango", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jerker", |
|
"middle": [], |
|
"last": "Liljestrand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The Lancet Global Health", |
|
"volume": "6", |
|
"issue": "11", |
|
"pages": "1196--1252", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/S2214-109X(18)30386-3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret E. Kruk, Anna D. Gage, Catherine Arsenault, Keely Jordan, Hannah H. Leslie, Sanam Roder- DeWan, Olusoji Adeyi, Pierre Barker, Bernadette Daelmans, Svetlana V. Doubova, Mike English, Eze- quiel Garc\u00eda Elorrio, Frederico Guanais, Oye Gureje, Lisa R. Hirschhorn, Lixin Jiang, Edward Kelley, Ephrem Tekle Lemango, Jerker Liljestrand, Address Malata, Tanya Marchant, Malebona Precious Mat- soso, John G. Meara, Manoj Mohanan, Youssoupha Ndiaye, Ole F. Norheim, K. Srinath Reddy, Alexan- der K. Rowe, Joshua A. Salomon, Gagan Thapa, Nana A. Y. Twum-Danso, and Muhammad Pate. 2018. High-quality health systems in the sustainable development goals era: time for a revolution. The Lancet Global Health, 6(11):e1196-e1252.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mallet: A machine learning for language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Andrew Kachites", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. Http://mallet.cs.umass.edu.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Optimizing semantic coherence in topic models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edmund", |
|
"middle": [], |
|
"last": "Talley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Leenders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "262--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262-272.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automatic evaluation of topic coherence", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jey", |
|
"middle": [ |
|
"Han" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Grieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Tim- othy Baldwin. 2010. Automatic evaluation of topic coherence. In Human language technologies: The 2010 annual conference of the North American chap- ter of the association for computational linguistics, pages 100-108.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.01108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Autoencoding variational inference for topic models", |
|
"authors": [ |
|
{ |
|
"first": "Akash", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.01488" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akash Srivastava and Charles Sutton. 2017. Autoen- coding variational inference for topic models. arXiv preprint arXiv:1703.01488.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Performances of different models. The top plot shows the coherence scores and the NPMI scores, and the bottom plot shows the topic uniqueness.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Method Topic Words LDA life abortion child would fetus think woman right human people person mother choice body baby pro women want abortions one KD murder alive human fetus personhood rights zygote argument fetuses womb begins killing survive pro clump embryo viable debate development life ETM abortion life child fetus people woman human women point person body make mother abortions baby medical support agree choice circumcision", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Topic words associated with the topic labelled as abortion. Bolded words are marked as topic relevant.", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>: Identified topics counts for 10-topics and 25-</td></tr><tr><td>topic models (separated by '/')</td></tr></table>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Average number of effective top words on top-", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td/><td>LDA</td><td>KD</td><td>ETM</td></tr><tr><td>10-topic</td><td>0.1181</td><td colspan=\"2\">0.1469 0.1243</td></tr><tr><td>25-topic</td><td>0.1401</td><td colspan=\"2\">0.1212 0.1384</td></tr><tr><td>50-topic</td><td>0.1460</td><td colspan=\"2\">0.1107 0.1347</td></tr><tr><td colspan=\"4\">100-topic 0.13190 0.1041 0.1105</td></tr><tr><td colspan=\"2\">200-topic 0.0997</td><td colspan=\"2\">0.0862 0.1073</td></tr></table>", |
|
"num": null, |
|
"text": "Coherence for topics generated", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "NPMI for topics generated", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td/><td>LDA KD</td><td>ETM</td></tr><tr><td>pre-train</td><td colspan=\"2\">teacher model embedding</td></tr><tr><td/><td>6409 s</td><td>10 s</td></tr><tr><td>10-topic</td><td>111 s 364 s</td><td>1112 s</td></tr><tr><td>25-topic</td><td>131 s 186 s</td><td>1121 s</td></tr><tr><td>50-topic</td><td>148 s 236 s</td><td>1110 s</td></tr><tr><td colspan=\"2\">100-topic 167 s 340 s</td><td>1062 s</td></tr><tr><td colspan=\"2\">200-topic 204 s 547 s</td><td>1086 s</td></tr></table>", |
|
"num": null, |
|
"text": "Topic Uniqueness for topics generated", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table><tr><td>: Training time for different models</td></tr><tr><td>(LDA -penile health, KD -vaginal health)</td></tr><tr><td>Comment 2 So will you be back for a follow up</td></tr><tr><td>pap in a few months or getting the colposcopy right</td></tr><tr><td>away? A lot of these clear up on their own. Try not</td></tr><tr><td>to panic!</td></tr><tr><td>(LDA -vaginal health, KD -vaginal health)</td></tr><tr><td>Comment 3 I was on Loestrin24Fe for about a</td></tr><tr><td>year but at my February yearly gyno appointment,</td></tr><tr><td>I asked my doctor about a pill to give me less pe-</td></tr><tr><td>riods because mine, like yours, were *brutal.* He</td></tr><tr><td>switched me to Lo Loestrin Fe and I haven't had</td></tr><tr><td>a period since! It's AWESOME. I've also noticed</td></tr><tr><td>that I am not as emotional as I was on L24Fe. Don't</td></tr><tr><td>get me wrong -For about the first month, I was su-</td></tr><tr><td>per sensitive to everything anyone said to me. But</td></tr><tr><td>since then, I've been fine. My sex drive is sky high</td></tr><tr><td>as always -but I don't think ANYTHING could</td></tr><tr><td>kill my libido! Haha. I have not gained any weight</td></tr><tr><td>since starting this pill. Hope this helps!</td></tr><tr><td>(LDA -birth control, KD -birth control)</td></tr><tr><td>Comment 4 Personally, I don't remember dis-</td></tr><tr><td>tinctly having that feeling (I had an epidural), but</td></tr><tr><td>I obviously would have no idea if I would have</td></tr><tr><td>otherwise. I think it was mostly just because I was</td></tr><tr><td>way too exhausted to feel anything but relief (over</td></tr><tr><td>30 hours of labor, and no sleep), also if anything, I</td></tr><tr><td>think the pitocin interfered with it since that's syn-</td></tr><tr><td>thetic oxytocin. However, I totally felt an instant</td></tr><tr><td>bond with my son and would not put him down</td></tr><tr><td>those first few hours, I didn't even go to sleep.</td></tr><tr><td>(LDA -labor and delivery, KD -labor and delivery)</td></tr><tr><td>Comment 5 The only reason I ever heard about</td></tr><tr><td>not being able to eat is in case of an emergency</td></tr><tr><td>c-section where you are put under general anesthe-</td></tr><tr><td>sia. I guess it depends on where you are and what</td></tr><tr><td>they do, but general anesthesia is not the common</td></tr><tr><td>practice for emergency c-section anymore. If you</td></tr><tr><td>were already having an epidural they would work</td></tr></table>", |
|
"num": null, |
|
"text": "That way you are still awake for the operation and there is minimal risk. When getting an epidural, a common side effect is a drop in blood pressure so an IV of fluids is introduced to offset this. That would be one reason to have IV fluids", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |