{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:27.360463Z"
},
"title": "Cluster Analysis of Online Mental Health Discourse using Topic-Infused Deep Contextualized Representations",
"authors": [
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Amey",
"middle": [],
"last": "Hengle",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Pradnya",
"middle": [],
"last": "Kulkarni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sir Parashurambhau College",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Manisha",
"middle": [],
"last": "Marathe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With mental health as a problem domain in NLP, the bulk of contemporary literature revolves around building better mental illness prediction models. The research focusing on the identification of discussion clusters in online mental health communities has been relatively limited. Moreover, as the underlying methodologies used in these studies mainly conform to the traditional machine learning models and statistical methods, the scope for introducing contextualized word representations for topic and theme extraction from online mental health communities remains open. Thus, in this research, we propose topic-infused deep contextualized representations, a novel data representation technique that uses autoencoders to combine deep contextual embeddings with topical information, generating robust representations for text clustering. Investigating the Reddit discourse on Post-Traumatic Stress Disorder (PTSD) and Complex Post-Traumatic Stress Disorder (C-PTSD), we elicit the thematic clusters representing the latent topics and themes discussed in the r/ptsd and r/CPTSD subreddits. Furthermore, we also present a qualitative analysis and characterization of each cluster, unraveling the prevalent discourse themes.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "With mental health as a problem domain in NLP, the bulk of contemporary literature revolves around building better mental illness prediction models. The research focusing on the identification of discussion clusters in online mental health communities has been relatively limited. Moreover, as the underlying methodologies used in these studies mainly conform to the traditional machine learning models and statistical methods, the scope for introducing contextualized word representations for topic and theme extraction from online mental health communities remains open. Thus, in this research, we propose topic-infused deep contextualized representations, a novel data representation technique that uses autoencoders to combine deep contextual embeddings with topical information, generating robust representations for text clustering. Investigating the Reddit discourse on Post-Traumatic Stress Disorder (PTSD) and Complex Post-Traumatic Stress Disorder (C-PTSD), we elicit the thematic clusters representing the latent topics and themes discussed in the r/ptsd and r/CPTSD subreddits. Furthermore, we also present a qualitative analysis and characterization of each cluster, unraveling the prevalent discourse themes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Due to their ubiquitous nature, online health communities and social media platforms have emerged as a conducive means of information exchange and social support, especially for people with stigmatized concerns such as mental health. Consequently, these platforms provide a rich ecosystem for mental health clinicians, researchers, and practitioners to analyze the cornucopia of user-generated content and study the underlying mechanisms of different mental health conditions. With the rapid headways in artificial intelligence and computational linguistics, an increasingly large number of researchers have leveraged social media content to study various mental health illnesses and psychiatric conditions. While most of the research has focused on using the traditional machine learning models and statistical methods for predicting mental illness from social media posts, the studies addressing the discourse analysis (De Choudhury and De, 2014; Silveira Fraga et al., 2018; Loveys et al., 2018) and identification of clusters in online mental health communities has been relatively modest. Even with the recent surge of complex attention-based deep learning models, a large chunk of the research regarding mental health issues has focused on building better predictive systems (Benton et al., 2017; Kirinde Gamaarachchige and Inkpen, 2019; Jiang et al., 2020; Sekulic and Strube, 2019) with less emphasis on using these models for mental health related corpus analysis or information extraction.",
"cite_spans": [
{
"start": 921,
"end": 948,
"text": "(De Choudhury and De, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 949,
"end": 977,
"text": "Silveira Fraga et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 978,
"end": 998,
"text": "Loveys et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 1281,
"end": 1302,
"text": "(Benton et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 1303,
"end": 1343,
"text": "Kirinde Gamaarachchige and Inkpen, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 1344,
"end": 1363,
"text": "Jiang et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 1364,
"end": 1389,
"text": "Sekulic and Strube, 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With mental health already coalesced as an appreciable public health burden, research to investigate the discourse clusters prevalent on social media is of paramount importance. Mining information from the emergent clusters provides a lens over the dominant themes of discussion, the discourse anatomy, and the dialogue structure in the online forums while also helping to comprehend the general public engagement, sentiment, ideas, and views regarding mental health. Successful research in this direction can potentially foster identification of high-risk groups, enhanced mental health patient education programs, better diagnostic and therapeutic theory building, as well as an improved understanding of the underlying design of the online mental health communities .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Post-traumatic stress disorder (PTSD) is a mental disorder resulting from traumatic experiences that leads to reliving the trauma, avoidance of certain situations, and hyper-vigilance. Similar to PTSD, complex post-traumatic stress disorder (C-PTSD) is a condition that formulates the reaction resulted from the trauma, such as uncontrollable emotions, dissociation, negative self-perception, anger, mistrust, and interpersonal difficulties. Thus, in this research, we examine the online discourse on Reddit, focussing on PTSD and C-PTSD as use cases to elicit different thematic clusters present in them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior research has shown that the addition of topic information to pre-trained contextualized representations yields performance improvement for semantic textual similarity (Peinelt et al., 2020) . While Peinelt et al. (2020) integrated the two representations by simple concatenation, a better methodology to integrate representations from different embedding spaces is argued by Bollegala and Bao (2018) . The meta-embeddings proposed by Bollegala and Bao (2018) are learned as the intermediate representations generated by various autoencoder variants. Thus, building on these two findings, we propose topic-infused deep contextualized representations, a novel data representation technique that uses a concatenated denoise autoencoder to combine deep contextual embeddings with topic information for generating robust document representations. Our methodology spawns document representations that subsume the topic information from Latent Dirichlet Allocation (Blei et al., 2003) with the contextual embeddings generated by pre-trained RoBERTa model (Liu et al., 2019) . We further demonstrate that the proposed methodology achieves improvement for text clustering against the contextual embeddings generated by the pretrained RoBERTa model. In the light of the above discussion, our research makes the following contributions:",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "(Peinelt et al., 2020)",
"ref_id": null
},
{
"start": 204,
"end": 225,
"text": "Peinelt et al. (2020)",
"ref_id": null
},
{
"start": 381,
"end": 405,
"text": "Bollegala and Bao (2018)",
"ref_id": "BIBREF3"
},
{
"start": 440,
"end": 464,
"text": "Bollegala and Bao (2018)",
"ref_id": "BIBREF3"
},
{
"start": 964,
"end": 983,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 1054,
"end": 1072,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We extend the methodology of word metaembeddings to document meta-embeddings by proposing topic-infused deep contextualized representations, a data representation technique that uses a concatenated denoise autoencoder to combine deep contextual embeddings with topical information for generating robust representations for text clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We carry out a qualitative analysis and characterization of each cluster from a clinical psychology perspective. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, the research in topic mining and theme extraction from online mental health communities has been focused on the use of probabilistic generative models like the Latent Dirichlet Allocation (Blei et al., 2003) and clustering techniques such as the k-means (Sch\u00fctze et al., 2008) . Carron-Arthur et al. (2016) employed LDA to extract topics from the internet support group BlueBoard. Results showed that users engaged in discussions with a greater topical focus on experiential knowledge, disclosure, and informational support, a pattern resembling the clinical symptom-focused approach to recovery. In their study, Dao et al. (2017) used the Hierarchical Dirichlet Process (HDP) algorithm to infer latent topics from blog posts of the Live-Journal (LJ) blogging site. The authors applied the non-parametric affinity propagation algorithm to find clusters within the online communities. Toulis and Golab (2017) compared the recurring themes encountered in private journals with the ones found in the online communities of Reddit and found significant similarities in the topics discussed across both the forums. provide an exhaustive analysis of the thematic overlap, similarity, and difference in online mental health communities of r/depression, r/anxiety, and r/ptsd. Their results show a considerable overlap of themes between the mental health groups, attesting that people engaging in such forums face common problems and comorbidity symptoms.",
"cite_spans": [
{
"start": 203,
"end": 222,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 269,
"end": 291,
"text": "(Sch\u00fctze et al., 2008)",
"ref_id": "BIBREF30"
},
{
"start": 294,
"end": 321,
"text": "Carron-Arthur et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 628,
"end": 645,
"text": "Dao et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since their introduction, transformer-based language models such as BERT (Devlin et al., 2019) have led to impressive performance gains across multiple NLP tasks. Recent works show that these contextualized representations can also support type-level clusters, and hence, can be effectively used for modeling topics (Sia et al., 2020) . The recent work by Thompson and Mimno (2020) demonstrates that running simple clustering algorithms like k-means on contextualized word representations result in word clusters that share similar properties to the ones generated by LDA. An interesting approach is followed by Peinelt et al. (2020) , where the authors combine the topic models of LDA with contextualized word representations of BERT for the task of semantic similarity detection. Results depict that adding topical information improves performance, especially for examples with domainspecific terms.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 316,
"end": 334,
"text": "(Sia et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 356,
"end": 381,
"text": "Thompson and Mimno (2020)",
"ref_id": "BIBREF35"
},
{
"start": 612,
"end": 633,
"text": "Peinelt et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For this experiment, we selected the r/ptsd 1 and r/CPTSD 2 subreddits which have 54,000 and 97,000 active users, respectively. Using the Pushift API 3 , we crawled all the available posts from these subreddits between 1st August 2015 and 31st July 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
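The crawl described above can be sketched as follows. This is a minimal sketch, assuming the public Pushshift submission-search endpoint and its `subreddit`/`after`/`before`/`size` parameters; a real crawler would additionally page through results using the `created_utc` of the last post returned:

```python
import datetime as dt

# Public Pushshift endpoint for Reddit submissions (assumed from its public docs).
PUSHSHIFT_URL = "https://api.pushshift.io/reddit/search/submission"

def build_params(subreddit, after_date, before_date, size=100):
    """Build one page of query parameters; ISO date strings are sent as epochs."""
    to_epoch = lambda d: int(dt.datetime.fromisoformat(d).timestamp())
    return {
        "subreddit": subreddit,
        "after": to_epoch(after_date),
        "before": to_epoch(before_date),
        "size": size,
    }

# One page of parameters for r/ptsd over the paper's crawl window:
params = build_params("ptsd", "2015-08-01", "2020-07-31")
```

The same call with `"CPTSD"` covers the second subreddit; looping until the API returns an empty page yields the full corpus.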
{
"text": "To ensure that each selected post has community approval, we selected posts that have a minimum of 10 net upvotes. We further filter our dataset by eliminating posts with less than 75% English content 4 , posts with less than five words, as well as posts with [DELETED] , [UPDATED] , and [REMOVED] entries. We employed standard text cleaning and normalization techniques for preprocessing the posts, including removing special characters, accented words, wordplays, URLs, replacing acronyms with full forms, and expanding contractions 5 . This resulted in a comprehensive dataset of 24,930 posts.",
"cite_spans": [
{
"start": 260,
"end": 269,
"text": "[DELETED]",
"ref_id": null
},
{
"start": 272,
"end": 281,
"text": "[UPDATED]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
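The filtering rules above can be sketched as below. This is a minimal sketch in which the field names (`score`, `selftext`) follow Reddit's submission schema; the English-content-ratio check and acronym expansion are omitted for brevity:

```python
import re

def keep_post(post):
    """Apply the paper's post filters (sketch; language-ratio check omitted)."""
    text = post.get("selftext", "").strip()
    if post.get("score", 0) < 10:                       # minimum 10 net upvotes
        return False
    if text.lower() in ("", "[deleted]", "[updated]", "[removed]"):
        return False
    if len(text.split()) < 5:                           # fewer than five words
        return False
    return True

def normalize(text):
    """Standard cleaning: drop URLs and special characters, squeeze whitespace."""
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[^a-zA-Z0-9'\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```

Applying `keep_post` and then `normalize` over the crawled submissions reproduces the shape of the pipeline that yields the final 24,930-post dataset.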
{
"text": "The proposed system consists of two key components: a robust data representation methodology and an efficient clustering algorithm. Figure 1 depicts the model architecture. Each component is elucidated in detail as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "4"
},
{
"text": "In this section, we posit the methodology to generate the topic-infused contextualized representations, using a multi-input concatenated denoise autoencoder. The proposed autoencoder architecture has three inputs namely: the document topic distribution of the post's selftext 6 , contextualized document embedding of the post's selftext, and the contextualized document embedding of the post's title. Let S 1 , S 2 , and S 3 denote the corresponding three input embedding spaces of dimensions d 1 , d 2 , and d 3 , respectively. Let N be the total number of posts. For each post p \u2208 N , the three document embeddings are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "s 1 (p) \u2208 R d 1 , s 2 (p) \u2208 R d 2 , and s 3 (p) \u2208 R d 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "The autoencoder model consists of three encoders E 1 , E 2 , and E 3 which encode the source embeddings to a common meta-embedding space M of dimensionality d m . Each encoder independently performs dimensionality reduction and non-linear transformations on the respective embeddings, thus learning to retain essential information from each source embedding. The dimensionalities of the encoded input embeddings are denoted, respectively, by d'_1, d'_2, and d'_3. The document meta-embedding m(p) is obtained by concatenating the three encoded representations, as given in Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "m(p) = E 1 (s 1 (p))\u2295E 2 (s 2 (p))\u2295E 3 (s 3 (p)) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "Therefore, the dimensionality of document metaembedding space M is computed as the sum of dimensionalities of encoded source embeddings. It is given by Equation 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d m = d 1 + d 2 + d 3",
"eq_num": "(2)"
}
],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "The three decoders D 1 , D 2 , and D 3 , try to reconstruct the individual source embeddings from the document meta-embeddings, thereby implicitly utilizing the common and the complementary information present in the source embeddings. Equations 3, 4, and 5 represent the reconstructed versions of the source embeddings, given by\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 (p), s 2 (p), and\u015d 3 (p). s 1 (p) = D 1 (m(p)) (3) s 2 (p) = D 2 (m(p)) (4) s 3 (p) = D 3 (m(p))",
"eq_num": "(5)"
}
],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "The objective loss L is calculated as the sum of the reconstruction loss for each of the three input embeddings. It is formulated in Equation 6. Thus, the proposed system jointly learns E 1 , E 2 , E 3 , and D 1 , D 2 , D 3 , such that the loss given by Equation 6 is minimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = p\u2208N (||\u015d 1 (p) \u2212 s 1 (p)|| 2 + ||\u015d 2 (p) \u2212 s 2 (p)|| 2 + ||\u015d 3 (p) \u2212 s 3 (p)|| 2 )",
"eq_num": "(6)"
}
],
"section": "Topic-infused Deep Contextualized Representations",
"sec_num": "4.1"
},
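The forward pass of Equations 1-6 can be sketched as below. This is a minimal NumPy sketch with random, untrained single-layer weights (training with Adam, learning-rate scheduling, etc. is omitted), using the dimensions reported later in the paper (input dims 10/768/768, encoded dims 10/379/379):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(d_in, d_out):
    """One dense layer's weight matrix and bias vector."""
    return rng.normal(0.0, 0.1, (d_in, d_out)), np.zeros(d_out)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Input dims (topic dist., selftext emb., title emb.) and encoded dims d':
dims_in, dims_enc = [10, 768, 768], [10, 379, 379]
d_m = sum(dims_enc)                                   # 768, Equation 2
encoders = [make_layer(i, o) for i, o in zip(dims_in, dims_enc)]
decoders = [make_layer(d_m, d) for d in dims_in]

def forward(sources, noise_rate=0.1):
    """Corrupt, encode, concatenate (Eq. 1), decode (Eqs. 3-5), sum losses (Eq. 6)."""
    corrupted = [x * (rng.random(x.shape) > noise_rate) for x in sources]
    m = np.concatenate([leaky_relu(x @ W + b)
                        for x, (W, b) in zip(corrupted, encoders)])
    recon = [leaky_relu(m @ W + b) for (W, b) in decoders]
    loss = sum(np.sum((r - x) ** 2) for r, x in zip(recon, sources))
    return m, loss

sources = [rng.normal(size=d) for d in dims_in]
m, loss = forward(sources)
```

The concatenated 768-dimensional `m` is the topic-infused representation; in practice the encoders and decoders are trained jointly to minimize the summed reconstruction loss before `m` is used for clustering.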
{
"text": "In this study, we make use of HDBSCAN ( Hierarchical Density-Based Spatial Clustering of Applications with Noise), a hierarchical densitybased unsupervised clustering technique to congregate semantically-similar posts together in clusters (McInnes et al., 2017) . Unlike partition-based clustering algorithms, HDBSCAN can find varying density clusters and is more robust to parameter selection. Moreover, HDBSCAN does not force the data points to belong to any particular cluster, making it suitable for handling outliers and noisy data. As HDBSCAN uses relative-distance measures for clustering, it often suffers from the curse of dimensionality (McInnes et al., 2017) . As the dimension of the topic-infused contextualized embeddings is quite high (d m = 768), we employ UMAP (Uniform Manifold Approximation Projection), a technique for general non-linear dimensionality reduction (McInnes et al., 2018) . UMAP is preferred over other dimensionality reduction algorithms as it keeps a significant portion of the high-dimensional local structure in lower dimensionality, thus, causing minimal information loss.",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "(McInnes et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 647,
"end": 669,
"text": "(McInnes et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 883,
"end": 905,
"text": "(McInnes et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dimensionality Reduction and Clustering",
"sec_num": "4.2"
},
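The reduce-then-cluster pipeline can be illustrated as below. The paper uses UMAP and HDBSCAN (with the hyperparameters reported in Section 6); as a deliberately library-light stand-in, this sketch substitutes scikit-learn's PCA and DBSCAN to show the same two-stage shape on synthetic 768-dimensional data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Stand-ins: PCA for UMAP (non-linear reduction), DBSCAN for HDBSCAN
# (hierarchical density-based clustering). Both swaps are for illustration only.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.05, (50, 768)),   # synthetic "cluster 0" posts
               rng.normal(1.0, 0.05, (50, 768))])  # synthetic "cluster 1" posts

X_low = PCA(n_components=10, random_state=42).fit_transform(X)   # 768-d -> 10-d
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X_low)       # -1 marks noise
```

With the real libraries the structure is identical: `umap.UMAP(...).fit_transform(X)` followed by `hdbscan.HDBSCAN(...).fit_predict(X_low)`, where points labelled -1 are treated as outliers rather than forced into a cluster.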
{
"text": "The document topic distributions are generated using the LDA mallet python version 7 . As the topics of interest centers around entities that are mostly nouns, we follow a nouns-only approach as employed by Martin and Johnson (2015) for topic modeling. The number of topics is empirically chosen as 10 since it displayed the best topic coherence score (c v) of 0.49. The rest of the hyperparameters for LDA are kept at default. The 768-dimensional contextualized document embeddings are generated as the average of all the embeddings for each word in the document, extracted from the second last layer of the pre-trained RoBERTa-base model (Liu et al., 2019) . Thus, the three input d 1 , d 2 , and d 3 are of dimensions 10, 768, and 768, respectively.",
"cite_spans": [
{
"start": 207,
"end": 232,
"text": "Martin and Johnson (2015)",
"ref_id": "BIBREF22"
},
{
"start": 640,
"end": 658,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
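The document-embedding step can be sketched as below: a minimal mean-pooling sketch, assuming the (num_tokens, 768) matrix has already been extracted from the second-to-last hidden layer of a pre-trained RoBERTa-base model (e.g. via Hugging Face `transformers` with `output_hidden_states=True`):

```python
import numpy as np

def document_embedding(token_embeddings):
    """Average the per-token vectors of one document into a single 768-d vector.

    token_embeddings: array of shape (num_tokens, 768), e.g. hidden_states[-2]
    of a RoBERTa forward pass (the extraction itself is not shown here).
    """
    token_embeddings = np.asarray(token_embeddings, dtype=float)
    return token_embeddings.mean(axis=0)

# Toy input standing in for 12 token vectors:
doc_vec = document_embedding(np.full((12, 768), 0.5))
```

The resulting vector is one of the three autoencoder inputs (selftext or title), alongside the 10-dimensional LDA topic distribution.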
{
"text": "In our experiments, each autoencoder is implemented as a single hidden layer neural network. The dimensions d 1 , d 2 , and d 3 are chosen as 10, 379, and 379, respectively. The hidden dimensions are chosen as such so that the topic-infused deep contextualized representations are of dimensions 768, making them comparable with that of RoBERTa. Masking noise of 10 percent is applied to the source embeddings before encoding (Vin- (Maas et al., 2013) activation is applied to each layer with the default parameter setting. The model is trained end-to-end for 200 epochs, with the Adam optimizer (Kingma and Ba, 2015), a learning rate of 0.001, and a mini-batch size of 128 for minimizing the objective loss. The learning rate is reduced by a factor of 0.1 if validation loss does not decline after 10 successive epochs. The model with the best loss is used for prediction.",
"cite_spans": [
{
"start": 431,
"end": 450,
"text": "(Maas et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "In order to assess the clustering performance, we make use of three measures, namely the Silhouette Coefficient (SC) (Rousseeuw, 1987) , the Calinski-Harabasz Index (CHI) (Cali\u0144ski and Harabasz, 1974) , and the Davies-Bouldin Index (DBI) (Davies and Bouldin, 1979) . Higher SC, and CHI, and lower DBI scores indicate a better separation of the clusters and tighter integration inside the clusters. We compare the clustering performance of three embedding sets, namely: the RoBERTa last layer embeddings, the RoBERTa second last layer embeddings, and our proposed topic-infused deep contextualized embeddings. Figures 3, 4 , and 5 depict the performance comparison of the corresponding three embeddings with respect to the three metrics mentioned above. The y-axis represents the three corresponding metrics, whereas the x-axis represents the UMAP component variations. The hyperparameter values of the number of neighbours and minimum distance were chosen as 30 and 0.0, respectively, while cosine similarity was used as the metric for computing the distance in the ambient space of the input data. The random state is seeded to an integer value (42). For HDB-SCAN, the minimum cluster size was selected as 300, with the other parameters set to the default values (McInnes et al., 2017) . From the comparative analysis, it is evident that the proposed topicinfused deep contextualized representations result in an improved clustering performance across all three metrics. For our study, we choose UMAP with components 10, as it empirically gives consistent performance across all three metrics. The 2D embedding space after clustering is shown in Figure 2 .",
"cite_spans": [
{
"start": 117,
"end": 134,
"text": "(Rousseeuw, 1987)",
"ref_id": "BIBREF29"
},
{
"start": 171,
"end": 200,
"text": "(Cali\u0144ski and Harabasz, 1974)",
"ref_id": "BIBREF6"
},
{
"start": 238,
"end": 264,
"text": "(Davies and Bouldin, 1979)",
"ref_id": "BIBREF11"
},
{
"start": 1264,
"end": 1286,
"text": "(McInnes et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 609,
"end": 621,
"text": "Figures 3, 4",
"ref_id": "FIGREF2"
},
{
"start": 1647,
"end": 1655,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Clustering Performance Evaluation",
"sec_num": "6"
},
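All three measures are available in scikit-learn; a minimal sketch on synthetic, well-separated data shows the expected direction of each score:

```python
import numpy as np
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

# Two tight, well-separated synthetic clusters as a toy stand-in for the
# clustered post embeddings.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (100, 10)),
               rng.normal(3.0, 0.1, (100, 10))])
labels = np.array([0] * 100 + [1] * 100)

sc = silhouette_score(X, labels)           # higher is better
chi = calinski_harabasz_score(X, labels)   # higher is better
dbi = davies_bouldin_score(X, labels)      # lower is better
```

In the paper's setting, `X` would be the UMAP-reduced embeddings and `labels` the HDBSCAN assignments (with noise points excluded).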
{
"text": "To reduce the cluster size, we extract the most representative posts or exemplars of each cluster 8 . In technical terms, exemplars are the data points that lie at the heart of a cluster, around which the ultimate cluster forms. Table 2 provides the statistics of the clusters and their respective exemplar sizes. To find the key-words for each cluster, we employ a class-based term-frequency inversedocument-frequency (c-TF-IDF) 9 method on each of the obtained exemplars. Unlike the traditional TF-IDF, which considers each document of a corpus, c-TF-IDF is a class-based method that treats all the documents belonging to a particular class as a single document. This enables us to find only the latent topics most representative of a particular cluster and penalize the frequent words across the clusters. For each cluster, the c-TF-IDF score is calculated using the Equation 7, where each word t is extracted for each class i, and the number of documents m is divided by the total frequency of the word t across all classes n.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Clustering Performance Evaluation",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c \u2212 T F \u2212 IDF i = t i w i * log m n j=1 t j",
"eq_num": "(7)"
}
],
"section": "Clustering Performance Evaluation",
"sec_num": "6"
},
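Equation 7 can be implemented in a few lines. This is a minimal sketch in which each class's exemplar documents are merged and scored; tokenisation is assumed done, and the variable names mirror the equation (t_i: word frequency in class i, w_i: total words in class i, m: total documents):

```python
import math
from collections import Counter

def c_tf_idf(class_docs):
    """Class-based TF-IDF (Equation 7): all documents of a class are merged.

    class_docs: {class_id: [list_of_tokens, ...]}
    returns   : {class_id: {word: score}}
    """
    m = sum(len(docs) for docs in class_docs.values())   # total number of documents
    per_class = {c: Counter(t for doc in docs for t in doc)
                 for c, docs in class_docs.items()}
    total = Counter()                                    # freq of word across all classes
    for counts in per_class.values():
        total.update(counts)
    return {c: {t: (f / sum(counts.values())) * math.log(m / total[t])
                for t, f in counts.items()}
            for c, counts in per_class.items()}

# Toy clusters; "trauma" appears in both classes, so it is penalized.
scores = c_tf_idf({
    "flashbacks": [["flashback", "trauma"], ["flashback", "panic"]],
    "therapy": [["therapist", "trauma"]],
})
```

Ranking each class's words by this score and keeping the top nouns reproduces the keyword-extraction step described above.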
{
"text": "The coherence scores for the topics generated by LDA and the clusters generated by the abovementioned embeddings are reported in Table 3 . The top 15 nouns with the highest c-TF-IDF scores for each cluster are used to evaluate the topic coherence. The overall coherence of the topics generated is evaluated using extrinsic as well as intrinsic measures. As the intrinsic topic coherence is computed using word co-occurrences in documents from the corpus (Mimno et al., 2011) , it is natural that LDA reports the highest intrinsic coherence (c v), as it is based on word-occurrences statistics. Moreover, as argued by Stevens et al. (2012) guarantee that the generated topics make semantic sense or that they are interpretable by humans.",
"cite_spans": [
{
"start": 454,
"end": 474,
"text": "(Mimno et al., 2011)",
"ref_id": "BIBREF25"
},
{
"start": 617,
"end": 638,
"text": "Stevens et al. (2012)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Clustering Performance Evaluation",
"sec_num": "6"
},
{
"text": "Normalized pointwise mutual information (NPMI) (Bouma, 2009) , on the other hand, has been shown to correlated with human judgement (Lau et al., 2014) . Thus, in this study, we use NPMI as an extrinsic measure. Experiments by Sia et al. (2020) suggest that clustering contextual embeddings can result in topics with better NPMI compared to LDA. As evident from the results reported in Table 3 , our results corroborate these findings, as our proposed methodology exhibits the best NPMI score. ",
"cite_spans": [
{
"start": 47,
"end": 60,
"text": "(Bouma, 2009)",
"ref_id": "BIBREF4"
},
{
"start": 132,
"end": 150,
"text": "(Lau et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 226,
"end": 243,
"text": "Sia et al. (2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Clustering Performance Evaluation",
"sec_num": "6"
},
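NPMI for a word pair can be sketched as below. This is a minimal document-level co-occurrence version (reference implementations often add smoothing and sliding windows, which are omitted here):

```python
import math

def npmi(word_a, word_b, docs):
    """Normalized PMI from document-level co-occurrence; docs is a list of token sets.

    NPMI = log( p(a,b) / (p(a) p(b)) ) / -log p(a,b), in [-1, 1].
    """
    n = len(docs)
    p_a = sum(word_a in d for d in docs) / n
    p_b = sum(word_b in d for d in docs) / n
    p_ab = sum((word_a in d) and (word_b in d) for d in docs) / n
    if p_ab == 0.0:
        return -1.0            # convention: the pair never co-occurs
    return math.log(p_ab / (p_a * p_b)) / -math.log(p_ab)

# Words that always co-occur score 1; words that never co-occur score -1.
score = npmi("flashback", "nightmare",
             [{"flashback", "nightmare"}, {"therapy"}, {"flashback", "nightmare"}])
```

Averaging this quantity over the top word pairs of each topic yields the topic-level NPMI reported in Table 3.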
{
"text": "For a comprehensive qualitative evaluation, we select the top 100 posts from each clusters which exhibit the highest Jaccard Index scores 10 with that of the top 15 nouns with the highest c-TF-IDF scores for each cluster. The qualitative analysis is carried out by the subject expert in clinical psychology to draw out the major discussion themes. Total 7 themes were extracted from the 9 clusters generated. They are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
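The post-selection step compares a post's words against the cluster's top c-TF-IDF nouns with the Jaccard index; a minimal sketch:

```python
def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| over two word collections."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Overlap between a post's tokens and a cluster's top keywords:
score = jaccard(["trauma", "flashback", "sleep"], ["trauma", "therapy"])
```

Ranking each cluster's posts by this score and keeping the top 100 gives the material handed to the clinical-psychology expert.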
{
"text": "Abuse stories: Cluster 3 reveals self-disclosure stories of abuse that are physical, emotional, or sexual in nature. Abusers are mainly parents, siblings, close family members, co-workers, and known people. Narrations prominently highlight dysfunctional family dynamics during the victim's childhood as well as the issues related to current family interactions. Family dynamics are characterized by alcoholic/ substance abusive parents and abusers with pathological personality traits. Posts indicate traumatic experiences, the effect of these experiences, and how they act as triggers. The victim's exposure to the trauma was long and frequent. \"My mother was very controlling, scary, abusive and manipulative...\" \"I've been coming to terms over the past 6 months or so with the fact that my family caused me a significant amount of trauma growing up...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"For as long as I can remember, my father was physically and emotionally abusive...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "Flashbacks: The primary themes of cluster 6 include flashbacks of traumatic incidences and the emotional disturbances associated with them. Feelings of anxiety, panic, nightmares, fear, impulsivity, and anger are prevalent throughout the cluster. Narrations reveal the way flashbacks are impeding current life issues. Examples include an aversion to touch and sexual experiences, difficulties in romantic relationships, social relationships, daily chores, and work as well as triggering health-related symptoms. Working through flashbacks, ways of dealing with it, and help to overcome it are also shared.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"...struggling with flashbacks, nightmares, and intrusive thoughts about trauma. can anyone give me reassurance that i'll get through this?...\" \"...I can stay that way for weeks while I slowly process the old trauma that comes up. I'm in that state right now -just overwhelmed by intense emotional flashbacks...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "Advice seeking: People have vividly expressed their feelings and are seeking advice about various issues related to PTSD, according to the posts in cluster 8. Main themes revolve around advice related to job functioning impacted by PTSD symptoms, social isolation, dependency issues, management of difficult emotions, exhaustion, and frustration. Posts in this cluster are brief and direct, seeking advice, help, and support from the Reddit community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"...I would appreciate any tips or advice. How do you deal with emotional isolation?\" \"When I'm more myself, I feel like I'm too cynical and rude. Anyone else have experience with this? How do you become more authentic?\" Therapy and therapist experiences: Clusters 1 and 4 mainly focus on therapy and therapist experiences. Posts highlight the process of therapy, experiences with therapists, challenges faced in the therapeutic process, insights gained from therapy, transference experiences, and seeking help for such issues. Other findings include posts that seek therapeutic methods other than visiting therapists. Surprisingly, a large chunk of the posts carries negative connotations about therapy or therapy experiences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"I've been traumatized by therapists in my childhood that relayed info back to my abusive parents and gave them more ammo to hurt me with...\" \"The only therapist who is willing to treat me is unreliable, too far away, and frankly unsympathetic and unqualified...\" Difficulty in emotional regulation: Posts in cluster 5 primarily focus on the different emotions associated with trauma. Posts suggest difficulty in controlling emotions, emotional neglect, panic, anxiety, feelings of emptiness and hopelessness, emotional distancing due to trauma, difficulty understanding and expressing emotions, and ambivalent emotions towards abusers. Other findings include reports of progress with trauma-related emotions. Most of the posts are help-seeking in nature, looking for validation of the posters' emotions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"DAE struggle with identifying or verbally explaining your emotions?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"I have recently realized that I have the hardest time identifying my emotions...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"DAE struggle with guilt around cutting off toxic people?\" \"How did you overcome chronic emotional numbness? Inability and lack of desire to develop a connection with others (platonic or romantic)?\" \"DAE feel like they've built a wall between themselves and their feelings?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "Working through traumatic abuse: Posts in cluster 2 narrate retrospective abusive, traumatic experiences and their psychological aftereffects. The majority of the posts indicate signs of shame, dissociation, social withdrawal, anxiety, self-downing, and victimization. Confrontation, triumph while struggling with vulnerabilities, hope, and new insights about oneself are other findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\"Mainly the reasons I self-isolate, distrust/avoid intimacy, and deprive myself of certain pleasures is due to my PTSD symptoms because of all the abuse I endured...\" \"The lifelong shame and fear, all the problems causing me to not succeed in life or have any healthy relationships...\" \"I haven't lived a day in my adult life. It's all been a decades-long dissociation.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "Abuse issues and general PTSD: Clusters 7 and 9 mostly contain posts about abuse stories and PTSD in general. The major themes relate to parent-child issues, insecure parenting styles, emotional and physical abuse, the effects of childhood trauma, loss and grieving over not having healthy parental relations, the impact of insecure parenting, disclosure of trauma incidents, and the sharing of recovery tips. Following are a few illustrations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "\".. I hate the days where you long for the childhood that you never had and support you never received. For the belonging and \"home\" that you never had ...\" \"... My last therapy session had me divulging my experiences as a young child and what life was like for me back then...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thematic Analysis of the Clusters",
"sec_num": "7"
},
{
"text": "The cluster structure is in accordance with the formal diagnostic criteria of PTSD and C-PTSD. The abuse stories reflected in cluster 3 correspond to Criterion A of PTSD in DSM-5 (Association et al., 2013), which concerns the experience of trauma. Of the six symptom clusters proffered by ICD-11 11 for C-PTSD (re-experiencing, avoidance, hypervigilance, emotional dysregulation, interpersonal difficulties, and negative self-concept), cluster 6 adequately corresponds to re-experiencing through flashbacks, while cluster 5 is consistent with emotional dysregulation. The parent-child relationships portrayed in cluster 2 correspond to etiological factors; dysfunctional parent-child relations in PTSD have been widely confirmed in the clinical psychology literature (Cockram et al., 2010; Cross et al., 2018; van Ee et al., 2016). On the other hand, the clusters characterizing therapy experiences (clusters 1 and 4), working through trauma (cluster 2), and advice seeking (cluster 8) pertain to the treatment and intervention aspects of PTSD. In conclusion, these clusters reveal salient features of PTSD.",
"cite_spans": [
{
"start": 170,
"end": 202,
"text": "DSM-5 (Association et al., 2013)",
"ref_id": null
},
{
"start": 754,
"end": 776,
"text": "(Cockram et al., 2010;",
"ref_id": "BIBREF8"
},
{
"start": 777,
"end": 796,
"text": "Cross et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 797,
"end": 817,
"text": "van Ee et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "8"
},
{
"text": "Although the clusters more or less convey discrete, independent themes, many common topics run throughout them. The recurrent expressions observed across clusters pertain to personal and social life (family, parents, friends, person, relationships), the daily grind (work, home, job, place), and temporal indicators (day, time, year, month, today, week). Furthermore, words encompassing cognition (thoughts, think, feel, feeling, lot, hard), emotional and affect expressions (happy, love, good, anxiety, kind, bad, hate), and inhibition expressions (avoid, deny, escape) recur throughout the posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "8"
},
{
"text": "Our work has its own drawbacks and limitations. The task at hand is fine-grained rather than coarse-grained clustering, which makes it difficult to draw out interpretable topics of discussion. This is attested by the low NPMI scores in Table 3. Furthermore, the posts' lengthy nature (see Table 1) makes it difficult for models like RoBERTa to capture maximum contextual information (Liu et al., 2019). Many psychiatric disorders are found to be co-morbid with PTSD (Brady et al., 2000). Research indicates that people facing mental health issues often find it difficult to regulate their ideas and views, and that posts from online mental health communities are more difficult to read and exhibit less lexical diversity. These factors further hinder the elicitation of interpretable topics from the corpus. Moreover, not all clusters exhibit independent discussion themes; some themes are a combination of multiple clusters.",
"cite_spans": [
{
"start": 415,
"end": 433,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 494,
"end": 519,
"text": "PTSD (Brady et al., 2000)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 264,
"end": 271,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 320,
"end": 327,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Limitations",
"sec_num": "9"
},
{
"text": "In this research, we introduce topic-infused deep contextualized representations, a robust data representation methodology that successfully integrates topical information with contextualized embeddings. We collect and analyze large-scale data pertaining to the online discourse on Reddit centered around PTSD and C-PTSD, and perform density-based clustering to draw out the prominent clusters and analyze the themes of discussion present in them. Despite the pervasive semantic and thematic similarities among the posts in the corpus, our methodology is, to some extent, able to draw out the underlying, fine-grained, latent clusters. The qualitative analysis of each cluster revealed characteristic themes and salient features of PTSD and C-PTSD, consistent with the clinical psychology literature. Through the lens of social media, this study delineates a deeper understanding of PTSD and C-PTSD, fostering further research in the early detection of mental illnesses, the identification of high-risk groups, enhanced mental health patient education programs, better diagnostic and therapeutic theory building, and an improved understanding of the underlying design of online mental health communities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "The future work of this research can take multiple directions. From an NLP standpoint, the robustness of the proposed methodology should be examined by testing it on various other benchmark NLP tasks such as semantic textual similarity, word analogy, and text classification, to name a few. Other variants of autoencoders and objective losses could be employed to facilitate a tighter integration of topical information with the contextualized embeddings. From the mental health and clinical psychology perspective, such research can easily be extended to other online mental health communities to draw useful insights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "https://www.reddit.com/r/ptsd/ 2 https://www.reddit.com/r/CPTSD/ 3 https://pushshift.io/ 4 https://pypi.org/project/pycld2/0.24/ 5 https://pypi.org/project/pycontractions/ The dataset statistics are provided in Table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "selftext attribute refers to the main body of the post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://radimrehurek.com/gensim/models/wrappers/ldamallet.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://hdbscan.readthedocs.io/en/latest/soft_clustering_explanation.html 9 https://github.com/MaartenGr/cTFIDF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/Jaccard_index",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://icd.who.int/en",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Diagnostic and statistical manual of mental disorders (DSM-5\u00ae)",
"authors": [
{
"first": "",
"middle": [],
"last": "American Psychiatric Association",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "American Psychiatric Association et al. 2013. Diagnostic and statistical manual of mental disorders (DSM-5\u00ae). American Psychiatric Pub.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multitask learning for mental health conditions with limited social media data",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "152--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 152-162, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine Learning research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning word meta-embeddings by autoencoding",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1650--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala and Cong Bao. 2018. Learning word meta-embeddings by autoencoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1650-1661, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Normalized (pointwise) mutual information in collocation extraction",
"authors": [
{
"first": "Gerlof",
"middle": [],
"last": "Bouma",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of GSCL",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, pages 31-40.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comorbidity of psychiatric disorders and posttraumatic stress disorder",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "Therese",
"middle": [
"K"
],
"last": "Brady",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Sylvia",
"middle": [],
"last": "Brewerton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lucerini",
"suffix": ""
}
],
"year": 2000,
"venue": "The Journal of clinical psychiatry",
"volume": "",
"issue": "7",
"pages": "22--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen T Brady, Therese K Killeen, Tim Brewerton, and Sylvia Lucerini. 2000. Comorbidity of psychiatric disorders and posttraumatic stress disorder. The Journal of clinical psychiatry, 61(suppl 7):22-32.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A dendrite method for cluster analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cali\u0144ski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harabasz",
"suffix": ""
}
],
"year": 1974,
"venue": "Communications in Statistics",
"volume": "3",
"issue": "1",
"pages": "1--27",
"other_ids": {
"DOI": [
"10.1080/03610927408827101"
]
},
"num": null,
"urls": [],
"raw_text": "T. Cali\u0144ski and J Harabasz. 1974. A dendrite method for cluster analysis. Communications in Statistics, 3(1):1-27.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What's all the talk about? topic modelling in a mental health internet support group",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Carron-Arthur",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Kylie",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"M"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2016,
"venue": "BMC psychiatry",
"volume": "16",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12888-016-1073-5"
]
},
"num": null,
"urls": [],
"raw_text": "Bradley Carron-Arthur, Julia Reynolds, Kylie Bennett, Anthony Bennett, and Kathleen M Griffiths. 2016. What's all the talk about? topic modelling in a mental health internet support group. BMC psychiatry, 16(1):367.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Role and treatment of early maladaptive schemas in vietnam veterans with ptsd",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cockram",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher W",
"middle": [],
"last": "Drummond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice",
"volume": "17",
"issue": "3",
"pages": "165--182",
"other_ids": {
"DOI": [
"10.1002/cpp.690"
]
},
"num": null,
"urls": [],
"raw_text": "David M Cockram, Peter D Drummond, and Christopher W Lee. 2010. Role and treatment of early maladaptive schemas in vietnam veterans with ptsd. Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice, 17(3):165-182.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Trauma exposure, ptsd, and parenting in a community sample of low-income, predominantly african american mothers and children",
"authors": [
{
"first": "Dorthie",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Vance",
"suffix": ""
},
{
"first": "Ye",
"middle": [
"Ji"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Ruchard",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Jovanovic",
"suffix": ""
},
{
"first": "Bekh",
"middle": [],
"last": "Bradley",
"suffix": ""
}
],
"year": 2018,
"venue": "Psychological trauma: theory, research, practice, and policy",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1037/tra0000264"
]
},
"num": null,
"urls": [],
"raw_text": "Dorthie Cross, L Alexander Vance, Ye Ji Kim, Andrew L Ruchard, Nathan Fox, Tanja Jovanovic, and Bekh Bradley. 2018. Trauma exposure, ptsd, and parenting in a community sample of low-income, predominantly african american mothers and children. Psychological trauma: theory, research, practice, and policy, 10(3):327.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Latent sentiment topic modelling and nonparametric discovery of online mental healthrelated communities",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Dao",
"suffix": ""
},
{
"first": "Thin",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Svetha",
"middle": [],
"last": "Venkatesh",
"suffix": ""
},
{
"first": "Dinh",
"middle": [],
"last": "Phung",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Data Science and Analytics",
"volume": "4",
"issue": "3",
"pages": "209--231",
"other_ids": {
"DOI": [
"10.1007/s41060-017-0073-y"
]
},
"num": null,
"urls": [],
"raw_text": "Bo Dao, Thin Nguyen, Svetha Venkatesh, and Dinh Phung. 2017. Latent sentiment topic modelling and nonparametric discovery of online mental health-related communities. International Journal of Data Science and Analytics, 4(3):209-231.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A cluster separation measure",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Davies",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Bouldin",
"suffix": ""
}
],
"year": 1979,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "",
"pages": "224--227",
"other_ids": {
"DOI": [
"10.1109/TPAMI.1979.4766909"
]
},
"num": null,
"urls": [],
"raw_text": "D. L. Davies and D. W. Bouldin. 1979. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-1(2):224-227.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mental health discourse on reddit: Self-disclosure, social support, and anonymity",
"authors": [
{
"first": "Sushovan",
"middle": [],
"last": "Munmun De Choudhury",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "De",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on reddit: Self-disclosure, social support, and anonymity. Proceedings of the International AAAI Conference on Web and Social Media, 8(1).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Parental ptsd, adverse parenting and child attachment in a refugee sample",
"authors": [
{
"first": "Rolf",
"middle": [
"J"
],
"last": "Elisa Van Ee",
"suffix": ""
},
{
"first": "Marian",
"middle": [
"J"
],
"last": "Kleber",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jongmans",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Trudy",
"suffix": ""
},
{
"first": "Dorothee",
"middle": [],
"last": "Mooren",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Out",
"suffix": ""
}
],
"year": 2016,
"venue": "Attachment & human development",
"volume": "18",
"issue": "3",
"pages": "273--291",
"other_ids": {
"DOI": [
"10.1080/14616734.2016.1148748"
]
},
"num": null,
"urls": [],
"raw_text": "Elisa van Ee, Rolf J Kleber, Marian J Jongmans, Trudy TM Mooren, and Dorothee Out. 2016. Parental ptsd, adverse parenting and child attachment in a refugee sample. Attachment & human development, 18(3):273-291.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detection of mental health from Reddit via deep contextualized representations",
"authors": [
{
"first": "Zhengping",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Ita Levitan",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Zomick",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {
"DOI": [
"10.18653/v1/2020.louhi-1.16"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengping Jiang, Sarah Ita Levitan, Jonathan Zomick, and Julia Hirschberg. 2020. Detection of mental health from Reddit via deep contextualized representations. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 147-156, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multi-task, multi-channel, multi-input learning for mental illness detection using social media text",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Prasadith Kirinde Gamaarachchige",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)",
"volume": "",
"issue": "",
"pages": "54--64",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6208"
]
},
"num": null,
"urls": [],
"raw_text": "Prasadith Kirinde Gamaarachchige and Diana Inkpen. 2019. Multi-task, multi-channel, multi-input learning for mental illness detection using social media text. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 54-64, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "530--539",
"other_ids": {
"DOI": [
"10.3115/v1/E14-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530-539, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-cultural differences in language markers of depression online",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Loveys",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Torrez",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Fine",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Moriarty",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0608"
]
},
"num": null,
"urls": [],
"raw_text": "Kate Loveys, Jonathan Torrez, Alex Fine, Glen Moriarty, and Glen Coppersmith. 2018. Cross-cultural differences in language markers of depression online. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 78-87, New Orleans, LA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Rectifier nonlinearities improve neural network acoustic models",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Awni",
"middle": [
"Y"
],
"last": "Hannun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "ICML Workshop on Deep Learning for Audio, Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "More efficient topic modelling through a noun only approach",
"authors": [
{
"first": "Fiona",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "111--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fiona Martin and Mark Johnson. 2015. More efficient topic modelling through a noun only approach. In Proceedings of the Australasian Language Technology Association Workshop 2015, pages 111-115, Parramatta, Australia.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "hdbscan: Hierarchical density based clustering",
"authors": [
{
"first": "Leland",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Astels",
"suffix": ""
}
],
"year": 2017,
"venue": "The Journal of Open Source Software",
"volume": "2",
"issue": "11",
"pages": "",
"other_ids": {
"DOI": [
"10.21105/joss.00205"
]
},
"num": null,
"urls": [],
"raw_text": "Leland McInnes, John Healy, and Steve Astels. 2017. hdbscan: Hierarchical density based clustering. The Journal of Open Source Software, 2(11).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Umap: Uniform manifold approximation and projection for dimension reduction",
"authors": [
{
"first": "Leland",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.03426"
]
},
"num": null,
"urls": [],
"raw_text": "Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Optimizing semantic coherence in topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Edmund",
"middle": [],
"last": "Talley",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Leenders",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "262--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262-272, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Harnessing reddit to understand the written-communication challenges experienced by individuals with mental health disorders: Analysis of texts from mental health communities",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2018,
"venue": "J Med Internet Res",
"volume": "20",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.2196/jmir.8219"
]
},
"num": null,
"urls": [],
"raw_text": "Albert Park and Mike Conway. 2018. Harness- ing reddit to understand the written-communication challenges experienced by individuals with men- tal health disorders: Analysis of texts from men- tal health communities. J Med Internet Res, 20(4):e121.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Examining thematic similarity, difference, and membership in three online mental health communities from reddit",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "Annie",
"middle": [
"T"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Comput. Hum. Behav",
"volume": "78",
"issue": "",
"pages": "98--112",
"other_ids": {
"DOI": [
"10.1016/j.chb.2017.09.001"
]
},
"num": null,
"urls": [],
"raw_text": "Albert Park, Mike Conway, and Annie T. Chen. 2018. Examining thematic similarity, difference, and mem- bership in three online mental health communities from reddit. Comput. Hum. Behav., 78(C):98-112.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "2020. tBERT: Topic models and BERT joining forces for semantic similarity detection",
"authors": [
{
"first": "Nicole",
"middle": [],
"last": "Peinelt",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7047--7055",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.630"
]
},
"num": null,
"urls": [],
"raw_text": "Nicole Peinelt, Dong Nguyen, and Maria Liakata. 2020. tBERT: Topic models and BERT joining forces for semantic similarity detection. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7047-7055, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Silhouettes: A graphical aid to the interpretation and validation of cluster analysis",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Rousseeuw",
"suffix": ""
}
],
"year": 1987,
"venue": "J. Comput. Appl. Math",
"volume": "20",
"issue": "1",
"pages": "53--65",
"other_ids": {
"DOI": [
"10.1016/0377-0427(87)90125-7"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math., 20(1):53-65.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Introduction to information retrieval",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Raghavan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "39",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze, Christopher D Manning, and Prab- hakar Raghavan. 2008. Introduction to information retrieval, volume 39. Cambridge University Press Cambridge.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Adapting deep learning methods for mental health prediction on social media",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Sekulic",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)",
"volume": "",
"issue": "",
"pages": "322--327",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5542"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Sekulic and Michael Strube. 2019. Adapting deep learning methods for mental health prediction on so- cial media. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 322-327, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Tired of topic models? clusters of pretrained word embeddings make for fast and good topics too!",
"authors": [
{
"first": "Suzanna",
"middle": [],
"last": "Sia",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Dalmia",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1728--1736",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.135"
]
},
"num": null,
"urls": [],
"raw_text": "Suzanna Sia, Ayush Dalmia, and Sabrina J. Mielke. 2020. Tired of topic models? clusters of pretrained word embeddings make for fast and good topics too! In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1728-1736, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Online social networks in health care: A study of mental disorders on reddit",
"authors": [
{
"first": "B",
"middle": [],
"last": "Fraga",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Couto Da Silva",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Murai",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)",
"volume": "",
"issue": "",
"pages": "568--573",
"other_ids": {
"DOI": [
"10.1109/WI.2018.00-36"
]
},
"num": null,
"urls": [],
"raw_text": "B. Silveira Fraga, A. P. Couto da Silva, and F. Mu- rai. 2018. Online social networks in health care: A study of mental disorders on reddit. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 568-573.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Exploring topic coherence over many models and many topics",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Kegelmeyer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Andrzejewski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Buttler",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "952--961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Stevens, Philip Kegelmeyer, David Andrzejew- ski, and David Buttler. 2012. Exploring topic coher- ence over many models and many topics. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 952- 961, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Topic modeling with contextualized word representation clusters",
"authors": [
{
"first": "Laure",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.12626"
]
},
"num": null,
"urls": [],
"raw_text": "Laure Thompson and David Mimno. 2020. Topic mod- eling with contextualized word representation clus- ters. arXiv preprint arXiv:2010.12626.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Social media mining to understand public mental health",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Toulis",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Golab",
"suffix": ""
}
],
"year": 2017,
"venue": "VLDB Workshop on Data Management and Analytics for Medicine and Healthcare",
"volume": "",
"issue": "",
"pages": "55--70",
"other_ids": {
"DOI": [
"10.1007/978-3-319-67186-4_6"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Toulis and Lukasz Golab. 2017. Social media mining to understand public mental health. In VLDB Workshop on Data Management and Analytics for Medicine and Healthcare, pages 55-70. Springer.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Lajoie",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "110",
"pages": "3371--3408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local de- noising criterion. Journal of Machine Learning Re- search, 11(110):3371-3408.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Architecture of the proposed system. Here, mn refers to masking noise applied to the autoencoder inputs."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "2D embdding space of the topic-infused deep contextualized representations (-1 represents the unassigned datapoints).and d 3 . The concatenation of each of the encoded input source embeddings results in the document meta-embedding m(p), as given by Equation 1."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Silhoutte Coefficient (SC) Comparison Figure 4: Calinski Harabasz Index (CHI) Comparison"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Davies Bouldin Index (DBI) Comparison cent et al., 2010). Leaky rectified linear (Leaky ReLU)"
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 6 depicts the wordclouds of the top 50 nouns with the highest c-TF-IDF scores for each cluster."
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Wordclouds of top 50 c-TF-IDF nouns for the 9 clusters."
},
"TABREF2": {
"content": "<table/>",
"text": "Cluster and respective exemplar sizes.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"text": "Topic coherence results.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}